Publications

  • Doumas, L. A. A., & Martin, A. E. (2018). Learning structured representations from experience. Psychology of Learning and Motivation, 69, 165-203. doi:10.1016/bs.plm.2018.10.002.

    Abstract

    How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations has central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, as well as how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models which learn from experience and provides an existence proof for how structured representations might be learned in the first place.
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

    by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. 38, 1891–1900. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Drude, S. (2009). Nasal harmony in Awetí ‐ A declarative account. ReVEL - Revista Virtual de Estudos da Linguagem, (3). Retrieved from http://www.revel.inf.br/en/edicoes/?mode=especial&id=16.

    Abstract

    This article describes and analyses nasal harmony (or spreading of nasality) in Awetí. It first shows generally how sounds in prefixes adapt to nasality or orality of stems, and how nasality in stems also ‘extends’ to the left. With abstract templates we show which phonetically nasal or oral sequences are possible in Awetí (focusing on stops, pre-nasalized stops and nasals) and which phonological analysis is appropriate to account for these regularities. In Awetí, there are intrinsically nasal and oral vowels and ‘neutral’ vowels which adapt phonetically to a following vowel or consonant, as is the case for sonorant consonants. Pre-nasalized stops such as “nt” are nasalized variants of stops, not post-oralized variants of nasals as in Tupí-Guaranian languages. For nasals and stops in syllable coda (end of morphemes), we postulate archiphonemes which adapt to the preceding vowel or a following consonant. Finally, using a declarative approach, the analysis formulates ‘rules’ (statements) which account for the ‘behavior’ of nasality in Awetí words, making use of “structured sequences” on both the phonetic and phonological levels. So, each unit (syllable, morpheme, word etc.) on any level has three components: a sequence of segments, a constituent structure (where pre-nasalized stops, like diphthongs, correspond to two segments), and an intonation structure. The statements describe which phonetic variants can be combined (concatenated) with which other variants, depending on their nasality or orality.
  • Duarri, A., Meng-Chin, A. L., Fokkens, M. R., Meijer, M., Smeets, C. J. L. M., Nibbeling, E. A. R., Boddeke, E., Sinke, R. J., Kampinga, H. H., Papazian, D. M., & Verbeek, D. S. (2015). Spinocerebellar ataxia type 19/22 mutations alter heterocomplex Kv4.3 channel function and gating in a dominant manner. Cellular and Molecular Life Sciences, 72(17), 3387-3399. doi:10.1007/s00018-015-1894-2.

    Abstract

    The dominantly inherited cerebellar ataxias are a heterogeneous group of neurodegenerative disorders caused by Purkinje cell loss in the cerebellum. Recently, we identified loss-of-function mutations in the KCND3 gene as the cause of spinocerebellar ataxia type 19/22 (SCA19/22), revealing a previously unknown role for the voltage-gated potassium channel, Kv4.3, in Purkinje cell survival. However, how mutant Kv4.3 affects wild-type Kv4.3 channel functioning remains unknown. We provide evidence that SCA19/22-mutant Kv4.3 exerts a dominant negative effect on the trafficking and surface expression of wild-type Kv4.3 in the absence of its regulatory subunit, KChIP2. Notably, this dominant negative effect can be rescued by the presence of KChIP2. We also found that all SCA19/22-mutant subunits either suppress wild-type Kv4.3 current amplitude or alter channel gating in a dominant manner. Our findings suggest that altered Kv4.3 channel localization and/or functioning resulting from SCA19/22 mutations may lead to Purkinje cell loss, neurodegeneration and ataxia.
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers’ and L2 learners’ knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Duffield, N., Matsuo, A., & Roberts, L. (2009). Factoring out the parallelism effect in VP-ellipsis: English vs. Dutch contrasts. Second Language Research, 25, 427-467. doi:10.1177/0267658309349425.

    Abstract

    Previous studies, including Duffield and Matsuo (2001; 2002; 2009), have demonstrated second language learners’ overall sensitivity to a parallelism constraint governing English VP-ellipsis constructions: like native speakers (NS), advanced Dutch, Spanish and Japanese learners of English reliably prefer ellipsis clauses with structurally parallel antecedents over those with non-parallel antecedents. However, these studies also suggest that, in contrast to English native speakers, L2 learners’ sensitivity to parallelism is strongly influenced by other non-syntactic formal factors, such that the constraint applies in a comparatively restricted range of construction-specific contexts. This article reports a set of follow-up experiments — from both computer-based as well as more traditional acceptability judgement tasks — that systematically manipulates these other factors. Convergent results from these tasks confirm a qualitative difference in the judgement patterns of the two groups, as well as important differences between theoreticians’ judgements and those of typical native speakers. We consider the implications of these findings for theories of ultimate attainment in second language acquisition (SLA), as well as for current theoretical accounts of ellipsis.
  • Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2018). MultiPic: A standardized set of 750 drawings with norms for six European languages. Quarterly Journal of Experimental Psychology, 71(4), 808-816. doi:10.1080/17470218.2017.1310261.

    Abstract

    Numerous studies in psychology, cognitive neuroscience and psycholinguistics have used pictures of objects as stimulus materials. Currently, authors engaged in cross-linguistic work or wishing to run parallel studies at multiple sites where different languages are spoken must rely on rather small sets of black-and-white or colored line drawings. These sets are increasingly experienced as being too limited. Therefore, we constructed a new set of 750 colored pictures of concrete concepts. This set, MultiPic, constitutes a new valuable tool for cognitive scientists investigating language, visual perception, memory and/or attention in monolingual or multilingual populations. Importantly, the MultiPic databank has been normed in six different European languages (British English, Spanish, French, Dutch, Italian and German). All stimuli and norms are freely available at http://www.bcbl.eu/databases/multipic

    Additional information

    http://www.bcbl.eu/databases/multipic
  • Dunn, M., Foley, R., Levinson, S. C., Reesink, G., & Terrill, A. (2007). Statistical reasoning in the evaluation of typological diversity in Island Melanesia. Oceanic Linguistics, 46(2), 388-403.

    Abstract

    This paper builds on previous work in which we attempted to retrieve a phylogenetic signal using abstract structural features alone, as opposed to cognate sets, drawn from a sample of Island Melanesian languages, both Oceanic (Austronesian) and (non-Austronesian) Papuan (Science 2005[309]: 2072-75). Here we clarify a number of misunderstandings of this approach, referring particularly to the critique by Mark Donohue and Simon Musgrave (in this same issue of Oceanic Linguistics), in which they fail to appreciate the statistical principles underlying computational phylogenetic methods. We also present new analyses that provide stronger evidence supporting the hypotheses put forward in our original paper: a reanalysis using Bayesian phylogenetic inference demonstrates the robustness of the data and methods, and provides a substantial improvement over the parsimony method used in our earlier paper. We further demonstrate, using the technique of spatial autocorrelation, that neither proximity nor Oceanic contact can be a major determinant of the pattern of structural variation of the Papuan languages, and thus that the phylogenetic relatedness of the Papuan languages remains a serious hypothesis.
  • Dunn, M. (2009). Contact and phylogeny in Island Melanesia. Lingua, 119(11), 1664-1678. doi:10.1016/j.lingua.2007.10.026.

    Abstract

    This paper shows that despite evidence of structural convergence between some of the Austronesian and non-Austronesian (Papuan) languages of Island Melanesia, statistical methods can detect two independent genealogical signals derived from linguistic structural features. Earlier work by the author and others has presented a maximum parsimony analysis which gave evidence for a genealogical connection between the non-Austronesian languages of island Melanesia. Using the same data set, this paper demonstrates for the non-statistician the application of more sophisticated statistical techniques—including Bayesian methods of phylogenetic inference, and shows that the evidence for common ancestry is if anything stronger than originally supposed.
  • Dunn, M., Margetts, A., Meira, S., & Terrill, A. (2007). Four languages from the lower end of the typology of locative predication. Linguistics, 45, 873-892. doi:10.1515/LING.2007.026.

    Abstract

    As proposed by Ameka and Levinson (this issue), locative verb systems can be classified into four types according to the number of verbs distinguished. This article addresses the lower extreme of this typology: languages which offer no choice of verb in the basic locative function (BLF). These languages have either a single locative verb, or do not use verbs at all in the basic locative construction (BLC, the construction used to encode the BLF). A close analysis is presented of the behavior of BLF predicate types in four genetically diverse languages: Chukchi (Chukotko-Kamchatkan, Russian Arctic) and Lavukaleve (Papuan isolate, Solomon Islands), which have a BLC with the normal copula/existential verb for the language; Tiriyó (Cariban/Taranoan, Brazil), which has an optional copula in the BLC; and Saliba (Austronesian/Western Oceanic, Papua New Guinea), a language with a verbless clause as the BLC. The status of these languages in the typology of positional verb systems is reviewed, and other relevant typological generalizations are discussed.
  • Dunn, M., & Ross, M. (2007). Is Kazukuru really non-Austronesian? Oceanic Linguistics, 46(1), 210-231. doi:10.1353/ol.2007.0018.

    Abstract

    Kazukuru is an extinct language, originally spoken in the inland of the western part of the island of New Georgia, Solomon Islands, and attested by very limited historical sources. Kazukuru has generally been considered to be a Papuan, that is, non-Austronesian, language, mostly on the basis of its lexicon. Reevaluation of the available data suggests a high likelihood that Kazukuru was in fact an Oceanic Austronesian language. Pronominal paradigms are clearly of Austronesian origin, and many other aspects of language structure retrievable from the limited data are also congruent with regional Oceanic Austronesian typology. The extent and possible causes of Kazukuru lexical deviations from the Austronesian norm are evaluated and discussed.
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation-eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50(3), 1102-1115. doi:10.3758/s13428-017-0929-z.

    Abstract

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. The use of this variant of the visual world paradigm has shown that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional (2D) stimuli that are mere abstractions of real world objects. Here we present a visual world paradigm study in a three-dimensional (3D) immersive virtual reality environment. Despite significant changes in the stimulus material and the different mode of stimulus presentation, language-mediated anticipatory eye movements were observed. These findings thus indicate prediction of upcoming words in language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eye-tracking in rich and multimodal 3D virtual environments.

    Additional information

    13428_2017_929_MOESM1_ESM.docx
  • Eimer, M., Kiss, M., Press, C., & Sauter, D. (2009). The roles of feature-specific task set and bottom-up salience in attentional capture: An ERP study. Journal of Experimental Psychology: Human Perception and Performance, 35, 1316-1328. doi:10.1037/a0015872.

    Abstract

    We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. ERPs and behavioural performance were measured in two experiments where spatially nonpredictive cues preceded visual search arrays that included a colour-defined target. When cue arrays contained a target-colour singleton, behavioural spatial cueing effects were accompanied by a cue-induced N2pc component, indicative of attentional capture. Behavioural cueing effects and N2pc components were only minimally attenuated for non-singleton relative to singleton target-colour cues, demonstrating that top-down task set has a much greater impact on attentional capture than bottom-up salience. For nontarget-colour singleton cues, no N2pc was triggered, but an anterior N2 component indicative of top-down inhibition was observed. In Experiment 2, these cues produced an inverted behavioural cueing effect, which was accompanied by a delayed N2pc to targets presented at cued locations. These results suggest that perceptually salient visual stimuli without task-relevant features trigger a transient location-specific inhibition process that prevents attentional capture, but delays the selection of subsequent target events.
  • Enard, W., Gehre, S., Hammerschmidt, K., Hölter, S. M., Blass, T., Somel, M., Brückner, M. K., Schreiweis, C., Winter, C., Sohr, R., Becker, L., Wiebe, V., Nickel, B., Giger, T., Müller, U., Groszer, M., Adler, T., Aguilar, A., Bolle, I., Calzada-Wack, J., Dalke, C., Ehrhardt, N., Favor, J., Fuchs, H., Gailus-Durner, V., Hans, W., Hölzlwimmer, G., Javaheri, A., Kalaydjiev, S., Kallnik, M., Kling, E., Kunder, S., Moßbrugger, I., Naton, B., Racz, I., Rathkolb, B., Rozman, J., Schrewe, A., Busch, D. H., Graw, J., Ivandic, B., Klingenspor, M., Klopstock, T., Ollert, M., Quintanilla-Martinez, L., Schulz, H., Wolf, E., Wurst, W., Zimmer, A., Fisher, S. E., Morgenstern, R., Arendt, T., Hrabé de Angelis, M., Fischer, J., Schwarz, J., & Pääbo, S. (2009). A humanized version of Foxp2 affects cortico-basal ganglia circuits in mice. Cell, 137(5), 961-971. doi:10.1016/j.cell.2009.03.041.

    Abstract

    It has been proposed that two amino acid substitutions in the transcription factor FOXP2 have been positively selected during human evolution due to effects on aspects of speech and language. Here, we introduce these substitutions into the endogenous Foxp2 gene of mice. Although these mice are generally healthy, they have qualitatively different ultrasonic vocalizations, decreased exploratory behavior and decreased dopamine concentrations in the brain suggesting that the humanized Foxp2 allele affects basal ganglia. In the striatum, a part of the basal ganglia affected in humans with a speech deficit due to a nonfunctional FOXP2 allele, we find that medium spiny neurons have increased dendrite lengths and increased synaptic plasticity. Since mice carrying one nonfunctional Foxp2 allele show opposite effects, this suggests that alterations in cortico-basal ganglia circuits might have been important for the evolution of speech and language in humans.
  • Enfield, N. J., Kita, S., & De Ruiter, J. P. (2007). Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics, 39(10), 1722-1741. doi:10.1016/j.pragma.2007.03.001.

    Abstract

    This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded “locality description” interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually “big” in form) carries primary, informationally foregrounded information (for saying “where” or “which one”). Infants perform this type of gesture long before they can talk. The second type of gesture (usually “small” in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary (“Don’t over-tell”), and an informational imperative not to provide less information than necessary (“Don’t under-tell”).
  • Enfield, N. J. (2009). Common tragedy [Review of the book The native mind and the cultural construction of nature by Scott Atran and Douglas Medin]. The Times Literary Supplement, September 18, 2009, 10-11.
  • Enfield, N. J. (2009). [Review of the book Serial verb constructions: A cross-linguistic typology ed. by Alexandra Y. Aikhenvald and R. M. W. Dixon]. Language, 85, 445-451. doi:10.1353/lan.0.0124.
  • Enfield, N. J. (2007). Encoding three-participant events in the Lao clause. Linguistics, 45(3), 509-538. doi:10.1515/LING.2007.016.

    Abstract

    Any language will have a range of predicates that specify three core participants (e.g. 'put', 'show', 'give'), and will conventionally provide a range of constructional types for the expression of these three participants in a structured single-clause or single-sentence event description. This article examines the clausal encoding of three-participant events in Lao, a Tai language of Southeast Asia. There is no possibility in Lao for expression of three full arguments in the core of a single-verb clause (although it is possible to have a third argument in a noncore slot, marked as oblique with a preposition-like element). Available alternatives include extraposing an argument using a topic-comment construction, incorporating an argument into the verb phrase, and ellipsing one or more contextually retrievable arguments. A more common strategy is verb serialization, for example, where a three-place verb (e.g. 'put') is assisted by an additional verb (typically a verb of handling such as 'carry') that provides a slot for the theme argument (e.g. the transferred object in a putting scene). The event construal encoded by this type of structure decomposes the event into a first stage in which the agent comes into control over a theme, and a second in which the agent performs a controlled action (e.g. of transfer) with respect to that theme and a goal (and/or source). The particular set of strategies that Lao offers for encoding three-participant events — notably, topic-comment strategy, ellipsis strategy, serial verb strategy — conform with (and are presumably motivated by) the general typological profile of the language. The typological features of Lao are typical for the mainland Southeast Asia area (isolating, topic-prominent, verb-serializing, widespread nominal ellipsis).
  • Enfield, N. J. (2007). [Comment on 'Agency' by Paul Kockelman]. Current Anthropology, 48(3), 392-392. doi:10.1086/512998.
  • Enfield, N. J. (2007). [Review of the book Ethnopragmatics: Understanding discourse in cultural context ed. by Cliff Goddard]. Intercultural Pragmatics, 4(3), 419-433. doi:10.1515/IP.2007.021.
  • Enfield, N. J. (2007). Lao separation verbs and the logic of linguistic event categorization. Cognitive Linguistics, 18(2), 287-296. doi:10.1515/COG.2007.016.

    Abstract

    While there are infinite conceivable events of material separation, those actually encoded in the conventions of a given language's verb semantics number only a few. Furthermore, there appear to be crosslinguistic parallels in the native verbal analysis of this conceptual domain. What are the operative distinctions, and why these? This article analyses a key subset of the bivalent (transitive) verbs of cutting and breaking in Lao. I present a decompositional analysis of the verbs glossed 'cut (off)', 'cut.into.with.placed.blade', 'cut.into.with.moving.blade', and 'snap', pursuing the idea that the attested combinations of sub-events have a natural logic to them. Consideration of the nature of linguistic categories, as distinct from categories in general, suggests that the attested distinctions must have ethnographic and social interactional significance, raising new lines of research for cognitive semantics.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J. (2015). Linguistic relativity from reference to agency. Annual Review of Anthropology, 44, 207-224. doi:10.1146/annurev-anthro-102214-014053.

    Abstract

    How are language, thought, and reality related? Interdisciplinary research on this question over the past two decades has made significant progress. Most of the work has been Neo-Whorfian in two senses: One, it has been driven by research questions that were articulated most explicitly and most famously by the linguistic anthropologist Benjamin Lee Whorf, and two, it has limited the scope of inquiry to Whorf's narrow interpretations of the key terms “language,” “thought,” and “reality.” This article first reviews some of the ideas and results of Neo-Whorfian work, concentrating on the special role of linguistic categorization in heuristic decision making. It then considers new and potential directions in work on linguistic relativity, taken broadly to mean the ways in which the perspective offered by a given language can affect thought (or mind) and reality. New lines of work must reconsider the idea of linguistic relativity by exploring the range of available interpretations of the key terms: in particular, “language” beyond reference, “thought” beyond nonsocial processing, and “reality” beyond brute, nonsocial facts.
  • Enfield, N. J. (1999). On the indispensability of semantics: Defining the ‘vacuous’. Rask: internationalt tidsskrift for sprog og kommunikation, 9/10, 285-304.
  • Enfield, N. J. (2015). Other-initiated repair in Lao. Open Linguistics, 1(1), 119-144. doi:10.2478/opli-2014-0006.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversation in the Lao language (a Southwestern Tai language spoken in Laos, Thailand, and Cambodia). The article reports findings specific to the Lao language from the comparative project that is the topic of this special issue. While the scope is general to the overall pattern of other-initiated repair as a set of practices and a system of semiotic resources, special attention is given to (1) the range of repair operations that are elicited by open other-initiators of repair in Lao, especially the subtle changes made when problem turns are repeated, and (2) the use of phrase-final particles—a characteristic feature of Lao grammar—in the marking of both other-initiations of repair and repair solution turns.
  • Enfield, N. J., & Diffloth, G. (2009). Phonology and sketch grammar of Kri, a Vietic language of Laos. Cahiers de Linguistique - Asie Orientale (CLAO), 38(1), 3-69.
  • Enfield, N. J. (2009). Relationship thinking and human pragmatics. Journal of Pragmatics, 41, 60-78. doi:10.1016/j.pragma.2008.09.007.

    Abstract

    The approach to pragmatics explored in this article focuses on elements of social interaction which are of universal relevance, and which may provide bases for a comparative approach. The discussion is anchored by reference to a fragment of conversation from a video-recording of Lao speakers during a home visit in rural Laos. The following points are discussed. First, an understanding of the full richness of context is indispensable for a proper understanding of any interaction. Second, human relationships are a primary locus of social organization, and as such constitute a key focus for pragmatics. Third, human social intelligence forms a universal cognitive under-carriage for interaction, and requires careful cross-cultural study. Fourth, a neo-Peircean framework for a general understanding of semiotic processes gives us a way of stepping away from language as our basic analytical frame. It is argued that in order to get a grip on pragmatics across human groups, we need to take a comparative approach in the biological sense—i.e. with reference to other species as well. From this perspective, human pragmatics is about using semiotic resources to try to meet goals in the realm of social relationships.
  • Erard, M. (2009). How Many Languages? Linguists Discover New Tongues in China. Science, 324(5925), 332-333. doi:10.1126/science.324.5925.332a.
  • Erard, M. (2015). What's in a name? Science, 347(6225), 941-943. doi:10.1126/science.347.6225.941.
  • Ergin, R., Meir, I., Ilkbasaran, D., Padden, C., & Jackendoff, R. (2018). The development of argument structure in Central Taurus Sign Language. Sign Language Studies, 18(4), 612-639. doi:10.1353/sls.2018.0018.

    Abstract

    One of the fundamental issues for a language is its capacity to express
    argument structure unambiguously. This study presents evidence
    for the emergence and the incremental development of these
    basic mechanisms in a newly developing language, Central Taurus
    Sign Language. Our analyses identify universal patterns in both the
    emergence and development of these mechanisms and in languagespecific
    trajectories.
  • Ernestus, M., Van Mulken, M., & Baayen, R. H. (2007). Ridders en heiligen in tijd en ruimte: Moderne stylometrische technieken toegepast op Oud-Franse teksten. Taal en Tongval, 58, 1-83.

    Abstract

    This article shows that Old-French literary texts differ systematically in their relative frequencies of syntactic constructions. These frequencies reflect differences in register (poetry versus prose), region (Picardy, Champagne, and Eastern France), time period (until 1250, 1251–1300, 1301–1350), and genre (hagiography, romance of chivalry, or other).
  • Ernestus, M., & Baayen, R. H. (2007). Paradigmatic effects in auditory word recognition: The case of alternating voice in Dutch. Language and Cognitive Processes, 22(1), 1-24. doi:10.1080/01690960500268303.

    Abstract

    Two lexical decision experiments addressed the role of paradigmatic effects in auditory word recognition. Experiment 1 showed that listeners classified a form with an incorrectly voiced final obstruent more readily as a word if the obstruent is realised as voiced in other forms of that word's morphological paradigm. Moreover, if such was the case, the exact probability of paradigmatic voicing emerged as a significant predictor of the response latencies. A greater probability of voicing correlated with longer response latencies for words correctly realised with voiceless final obstruents. A similar effect of this probability was observed in Experiment 2 for words with completely voiceless or weakly voiced (incompletely neutralised) final obstruents. These data demonstrate the relevance of paradigmatically related complex words for the processing of morphologically simple words in auditory word recognition.
  • Ernestus, M., & Cutler, A. (2015). BALDEY: A database of auditory lexical decisions. Quarterly Journal of Experimental Psychology, 68, 1469-1488. doi:10.1080/17470218.2014.984730.

    Abstract

    In an auditory lexical decision experiment, 5,541 spoken content words and pseudo-words were presented to 20 native speakers of Dutch. The words vary in phonological makeup and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudo-words were matched in these respects to the real words. The BALDEY data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbors, and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles and frequency ratings by 70 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
  • Ernestus, M., Hanique, I., & Verboom, E. (2015). The effect of speech situation on the occurrence of reduced word pronunciation variants. Journal of Phonetics, 48, 60-75. doi:10.1016/j.wocn.2014.08.001.

    Abstract

    This article presents two studies investigating how the situation in which speech is uttered affects the frequency with which words are reduced. Study 1 is based on the Spoken Dutch Corpus, which consists of 15 components, nearly all representing a different speech situation. This study shows that the components differ in how often ten semantically weak words are highly reduced. The differences are especially large between the components with scripted and unscripted speech. Within the component group of unscripted speech, the formality of the situation shows an effect. Study 2 investigated segment reduction in a shadowing experiment in which participants repeated Dutch carefully and casually articulated sentences. Prefixal schwa and suffixal /t/ were absent in participants' responses to both sentence types as often as in formal interviews. If a segment was absent, this appeared to be mostly due to extreme co-articulation, unlike in speech produced in less formal situations. Speakers thus adapted more to the formal situation of the experiment than to the stimuli to be shadowed. We conclude that speech situation affects the occurrence of reduced word pronunciation variants, which should be accounted for by psycholinguistic models of speech production and comprehension.
  • Essegbey, J., & Ameka, F. K. (2007). "Cut" and "break" verbs in Gbe and Sranan. Journal of Pidgin and Creole Languages, 22(1), 37-55. doi:10.1075/jpcl.22.1.04ess.

    Abstract

    This paper compares “cut” and “break” verbs in four variants of Gbe, namely Anfoe, Anlo, Fon and Ayizo, with those of Sranan. “Cut” verbs are change-of-state verbs that co-lexicalize the type of action that brings about a change, the type of instrument or instrument part, and the manner in which a change occurs. By contrast, “break” verbs co-lexicalize either the type of object or the type of change. It has been hypothesized that “cut” verbs are unergative while “break” verbs are unaccusative. For example, “break” verbs participate in causative alternation constructions but “cut” verbs do not. We show that although there are some differences in the meanings of “cut” and “break” verbs across the Gbe languages, significant generalizations can be made with regard to their lexicalization patterns. By contrast, the meanings of “cut” and “break” verbs in Sranan are closer to those of their etymons in English and Dutch. However, despite the differences in the meanings of “cut” and “break” verbs between the Gbe languages and Sranan, the syntax of the verbs in Sranan is similar to that of the Eastern Gbe variants, namely Fon and Ayizo. We look at the implications of our findings for the relexification hypothesis. (copyright Benjamins)
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within the FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement I: Framework and initial exemplification. Language and Cognition, 10, 110-140. doi:10.1017/langcog.2017.21.

    Abstract

    Human language offers rich ways to track, compare, and engage the attentional and epistemic states of interlocutors. While this task is central to everyday communication, our knowledge of the cross-linguistic grammatical means that target such intersubjective coordination has remained basic. In two serialised papers, we introduce the term ‘engagement’ to refer to grammaticalised means for encoding the relative mental directedness of speaker and addressee towards an entity or state of affairs, and describe examples of engagement systems from around the world. Engagement systems express the speaker’s assumptions about the degree to which their attention or knowledge is shared (or not shared) by the addressee. Engagement categories can operate at the level of entities in the here-and-now (deixis), in the unfolding discourse (definiteness vs indefiniteness), entire event-depicting propositions (through markers with clausal scope), and even metapropositions (potentially scoping over evidential values). In this first paper, we introduce engagement and situate it with respect to existing work on intersubjectivity in language. We then explore the key role of deixis in coordinating attention and expressing engagement, moving through increasingly intercognitive deictic systems from those that focus on the location of the speaker, to those that encode the attentional state of the addressee.
  • Evans, N., Bergqvist, H., & San Roque, L. (2018). The grammar of engagement II: Typology and diachrony. Language and Cognition, 10(1), 141-170. doi:10.1017/langcog.2017.22.

    Abstract

    Engagement systems encode the relative accessibility of an entity or state of affairs to the speaker and addressee, and are thus underpinned by our social cognitive capacities. In our first foray into engagement (Part 1), we focused on specialised semantic contrasts as found in entity-level deictic systems, tailored to the primal scenario for establishing joint attention. This second paper broadens out to an exploration of engagement at the level of events and even metapropositions, and comments on how such systems may evolve. The languages Andoke and Kogi demonstrate what a canonical system of engagement with clausal scope looks like, symmetrically assigning ‘knowing’ and ‘unknowing’ values to speaker and addressee. Engagement is also found cross-cutting other epistemic categories such as evidentiality, for example where a complex assessment of relative speaker and addressee awareness concerns the source of information rather than the proposition itself. Data from the language Abui reveal that one way in which engagement systems can develop is by upscoping demonstratives, which normally denote entities, to apply at the level of events. We conclude by stressing the need for studies that focus on what difference it makes, in terms of communicative behaviour, for intersubjective coordination to be managed by engagement systems as opposed to other, non-grammaticalised means.
  • Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429-492. doi:10.1017/S0140525X0999094X.

    Abstract

    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
  • Evans, N., & Levinson, S. C. (2009). With diversity in mind: Freeing the language sciences from universal grammar [Author's response]. Behavioral and Brain Sciences, 32(5), 472-484. doi:10.1017/S0140525X09990525.

    Abstract

    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing crosslinguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
  • Everett, D., & Majid, A. (2009). Adventures in the jungle of language. [Interview by Asifa Majid and Jon Sutton.]. The Psychologist, 22(4), 312-313. Retrieved from http://www.thepsychologist.org.uk/archive/archive_home.cfm?volumeID=22&editionID=174&ArticleID=1494.

    Abstract

    Daniel Everett has spent his career in the Amazon, challenging some fundamental ideas about language and thought. Asifa Majid and Jon Sutton pose the questions.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2015). Climate, vocal folds, and tonal languages: Connecting the physiological and geographic dots. Proceedings of the National Academy of Sciences of the United States of America, 112, 1322-1327. doi:10.1073/pnas.1417413112.

    Abstract

    We summarize a number of findings in laryngology demonstrating that perturbations of phonation, including increased jitter and shimmer, are associated with desiccated ambient air. We predict that, given the relative imprecision of vocal fold vibration in desiccated versus humid contexts, arid and cold ecologies should be less amenable, when contrasted to warm and humid ecologies, to the development of languages with phonemic tone, especially complex tone. This prediction is supported by data from two large independently coded databases representing 3,700+ languages. Languages with complex tonality have generally not developed in very cold or otherwise desiccated climates, in accordance with the physiologically based predictions. The predicted global geographic–linguistic association is shown to operate within continents, within major language families, and across language isolates. Our results offer evidence that human sound systems are influenced by environmental factors.
  • Eysenck, M. W., & Van Berkum, J. J. A. (1992). Trait anxiety, defensiveness, and the structure of worry. Personality and Individual Differences, 13(12), 1285-1290. Retrieved from http://www.sciencedirect.com/science//journal/01918869.

    Abstract

    A principal components analysis of the ten scales of the Worry Questionnaire revealed the existence of major worry factors or domains of social evaluation and physical threat, and these factors were confirmed in a subsequent item analysis. Those high in trait anxiety had much higher scores on the Worry Questionnaire than those low in trait anxiety, especially on those scales relating to social evaluation. Scores on the Marlowe-Crowne Social Desirability Scale were negatively related to worry frequency. However, groups of low-anxious and repressed individuals formed on the basis of their trait anxiety and social desirability scores did not differ in worry. It was concluded that worry, especially in the social evaluation domain, is of fundamental importance to trait anxiety.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1-9. doi:10.3758/MC.37.1.1.

    Abstract

    In this study, we investigate whether language and music share cognitive resources for structural processing. We report an experiment that used sung materials and manipulated linguistic complexity (subject-extracted relative clauses, object-extracted relative clauses) and musical complexity (in-key critical note, out-of-key critical note, auditory anomaly on the critical note involving a loudness increase). The auditory-anomaly manipulation was included in order to test whether the difference between in-key and out-of-key conditions might be due to any salient, unexpected acoustic event. The critical dependent measure involved comprehension accuracies to questions about the propositional content of the sentences asked at the end of each trial. The results revealed an interaction between linguistic and musical complexity such that the difference between the subject- and object-extracted relative clause conditions was larger in the out-of-key condition than in the in-key and auditory-anomaly conditions. These results provide evidence for an overlap in structural processing between language and music.
  • Felemban, D., Verdonschot, R. G., Iwamoto, Y., Uchiyama, Y., Kakimoto, N., Kreiborg, S., & Murakami, S. (2018). A quantitative experimental phantom study on MRI image uniformity. Dentomaxillofacial Radiology, 47(6): 20180077. doi:10.1259/dmfr.20180077.

    Abstract

    Objectives: Our goal was to assess MR image uniformity by investigating aspects influencing said uniformity via a method laid out by the National Electrical Manufacturers Association (NEMA).
    Methods: Six metallic materials (i.e. Au, Ag, Al, Au-Ag-Pd alloy, Ti and Co-Cr alloy) embedded in a glass phantom were scanned, as well as a reference image. Sequences included spin echo (SE) and gradient echo (GRE), scanned in three planes (i.e. axial, coronal, and sagittal). Moreover, three surface coil types (i.e. head and neck, brain, and temporomandibular joint coils) and two image correction methods (i.e. surface coil intensity correction or SCIC, phased array uniformity enhancement or PURE) were employed to evaluate their effectiveness on image uniformity. Image uniformity was assessed using the NEMA peak-deviation non-uniformity method.
    Results: Results showed that temporomandibular joint coils elicited the least uniform image and that brain coils outperformed head and neck coils when metallic materials were present. Additionally, when metallic materials were present, spin echo outperformed gradient echo, especially for Co-Cr (particularly in the axial plane). Furthermore, both SCIC and PURE improved image uniformity compared to uncorrected images, and SCIC slightly surpassed PURE when metallic materials were present. Lastly, Co-Cr elicited the least uniform image, while the other metallic materials generally showed similar patterns (i.e. no significant deviation from images without metallic materials).
    Conclusions: Overall, a quantitative understanding of the factors influencing MR image uniformity (e.g. coil type, imaging method, metal susceptibility, and post-hoc correction method) is advantageous for optimizing image quality, assists clinical interpretation, and may result in improved medical and dental care.
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Felser, C., & Roberts, L. (2007). Processing wh-dependencies in a second language: A cross-modal priming study. Second Language Research, 23(1), 9-36. doi:10.1177/0267658307071600.

    Abstract

    This study investigates the real-time processing of wh-dependencies by advanced Greek-speaking learners of English using a cross-modal picture priming task. Participants were asked to respond to different types of picture target presented either at structurally defined gap positions, or at pre-gap control positions, while listening to sentences containing indirect-object relative clauses. Our results indicate that the learners processed the experimental sentences differently from both adult native speakers of English and monolingual English-speaking children. Contrary to what has been found for native speakers, the learners' response pattern was not influenced by individual working memory differences. Adult second language learners differed from native speakers with a relatively high reading or listening span in that they did not show any evidence of structurally based antecedent reactivation at the point of the indirect object gap. They also differed from low-span native speakers, however, in that they showed evidence of maintained antecedent activation during the processing of the experimental sentences. Whereas the localized priming effect observed in the high-span controls is indicative of trace-based antecedent reactivation in native sentence processing, the results from the Greek-speaking learners support the hypothesis that the mental representations built during non-native language processing lack abstract linguistic structure such as movement traces.
  • Fisher, S. E., Stein, J. F., & Monaco, A. P. (1999). A genome-wide search strategy for identifying quantitative trait loci involved in reading and spelling disability (developmental dyslexia). European Child & Adolescent Psychiatry, 8(suppl. 3), S47-S51. doi:10.1007/PL00010694.

    Abstract

    Family and twin studies of developmental dyslexia have consistently shown that there is a significant heritable component for this disorder. However, any genetic basis for the trait is likely to be complex, involving reduced penetrance, phenocopy, heterogeneity and oligogenic inheritance. This complexity results in reduced power for traditional parametric linkage analysis, where specification of the correct genetic model is important. One strategy is to focus on large multigenerational pedigrees with severe phenotypes and/or apparent simple Mendelian inheritance, as has been successfully demonstrated for speech and language impairment. This approach is limited by the scarcity of such families. An alternative which has recently become feasible due to the development of high-throughput genotyping techniques is the analysis of large numbers of sib-pairs using allele-sharing methodology. This paper outlines our strategy for conducting a systematic genome-wide search for genes involved in dyslexia in a large number of affected sib-pair families from the UK. We use a series of psychometric tests to obtain different quantitative measures of reading deficit, which should correlate with different components of the dyslexia phenotype, such as phonological awareness and orthographic coding ability. This enables us to use QTL (quantitative trait locus) mapping as a powerful tool for localising genes which may contribute to reading and spelling disability.
  • Fisher, S. E., Marlow, A. J., Lamb, J., Maestrini, E., Williams, D. F., Richardson, A. J., Weeks, D. E., Stein, J. F., & Monaco, A. P. (1999). A quantitative-trait locus on chromosome 6p influences different aspects of developmental dyslexia. American Journal of Human Genetics, 64(1), 146-156. doi:10.1086/302190.

    Abstract

    Recent application of nonparametric-linkage analysis to reading disability has implicated a putative quantitative-trait locus (QTL) on the short arm of chromosome 6. In the present study, we use QTL methods to evaluate linkage to the 6p25-21.3 region in a sample of 181 sib pairs from 82 nuclear families that were selected on the basis of a dyslexic proband. We have assessed linkage directly for several quantitative measures that should correlate with different components of the phenotype, rather than using a single composite measure or employing categorical definitions of subtypes. Our measures include the traditional IQ/reading discrepancy score, as well as tests of word recognition, irregular-word reading, and nonword reading. Pointwise analysis by means of sib-pair trait differences suggests the presence, in 6p21.3, of a QTL influencing multiple components of dyslexia, in particular the reading of irregular words (P=.0016) and nonwords (P=.0024). A complementary statistical approach involving estimation of variance components supports these findings (irregular words, P=.007; nonwords, P=.0004). Multipoint analyses place the QTL within the D6S422-D6S291 interval, with a peak around markers D6S276 and D6S105 consistently identified by approaches based on trait differences (irregular words, P=.00035; nonwords, P=.0035) and variance components (irregular words, P=.007; nonwords, P=.0038). Our findings indicate that the QTL affects both phonological and orthographic skills and is not specific to phoneme awareness, as has been previously suggested. Further studies will be necessary to obtain a more precise localization of this QTL, which may lead to the isolation of one of the genes involved in developmental dyslexia.
  • Fisher, S. E., & Scharff, C. (2009). FOXP2 as a molecular window into speech and language [Review article]. Trends in Genetics, 25, 166-177. doi:10.1016/j.tig.2009.03.002.

    Abstract

    Rare mutations of the FOXP2 transcription factor gene cause a monogenic syndrome characterized by impaired speech development and linguistic deficits. Recent genomic investigations indicate that its downstream neural targets make broader impacts on common language impairments, bridging clinically distinct disorders. Moreover, the striking conservation of both FoxP2 sequence and neural expression in different vertebrates facilitates the use of animal models to study ancestral pathways that have been recruited towards human speech and language. Intriguingly, reduced FoxP2 dosage yields abnormal synaptic plasticity and impaired motor-skill learning in mice, and disrupts vocal learning in songbirds. Converging data indicate that Foxp2 is important for modulating the plasticity of relevant neural circuits. This body of research represents the first functional genetic forays into neural mechanisms contributing to human spoken language.
  • Fisher, S. E., & Vernes, S. C. (2015). Genetics and the Language Sciences. Annual Review of Linguistics, 1, 289-310. doi:10.1146/annurev-linguist-030514-125024.

    Abstract

    Theories addressing the biological basis of language must be built on an appreciation of the ways that molecular and neurobiological substrates can contribute to aspects of human cognition. Here, we lay out the principles by which a genome could potentially encode the necessary information to produce a language-ready brain. We describe what genes are; how they are regulated; and how they affect the formation, function, and plasticity of neuronal circuits. At each step, we give examples of molecules implicated in pathways that are important for speech and language. Finally, we discuss technological advances in genomics that are revealing considerable genotypic variation in the human population, from rare mutations to common polymorphisms, with the potential to relate this variation to natural variability in speech and language skills. Moving forward, an interdisciplinary approach to the language sciences, integrating genetics, neurobiology, psychology, and linguistics, will be essential for a complete understanding of our unique human capacities.
  • Fisher, S. E. (2007). Molecular windows into speech and language disorders. Folia Phoniatrica et Logopaedica, 59, 130-140. doi:10.1159/000101771.

    Abstract

    Why do some children fail to acquire speech and language skills despite adequate environmental input and overtly normal neurological and anatomical development? It has been suspected for several decades, based on indirect evidence, that the human genome might hold some answers to this enigma. These suspicions have recently received dramatic confirmation with the discovery of specific genetic changes which appear sufficient to derail speech and language development. Indeed, researchers are already using information from genetic studies to aid early diagnosis and to shed light on the neural pathways that are perturbed in these inherited forms of speech and language disorder. Thus, we have entered an exciting era for dissecting the neural bases of human communication, one which takes genes and molecules as a starting point. In the current article I explain how this recent paradigm shift has occurred and describe the new vistas that have opened up. I demonstrate ways of bridging the gaps between molecules, neurons and the brain, which will provide a new understanding of the aetiology of speech and language impairments.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • FitzPatrick, I. (2007). Effects of sentence context in L2 natural speech comprehension. Nijmegen CNS, 2, 43-56.

    Abstract

    Electrophysiological studies consistently find N400 effects of semantic incongruity in non-native written language comprehension. Typically these N400 effects are later than N400 effects in native comprehension, suggesting that semantic processing in one’s second language (L2) may be delayed compared to one’s first language (L1). In this study we were firstly interested in replicating the semantic incongruity effect using natural auditory speech, which poses strong demands on the speed of processing. Secondly, we wished to investigate whether a possible delay in semantic processing might be due to bilinguals accessing lexical items from both their L1 and L2 (a more extensive lexical search). We recorded EEG from 30 Dutch-English bilinguals who listened to English sentences in which the sentence-final word was: (1) semantically fitting, (2) semantically incongruent, (3) initially congruent: semantically incongruent, but sharing initial phonemes with the most probable sentence completion within the L2, (4) semantically incongruent, but sharing initial phonemes with the L1 translation equivalent of the most probable sentence completion. We found an N400 effect in each of the semantically incongruent conditions. This N400 effect was significantly delayed to L2 words that were initially congruent with the sentence context. We found no effect of initial overlap with L1 translation equivalents. Taken together these findings firstly demonstrate that non-native listeners are sensitive to semantic incongruity in natural speech, secondly indicate that semantic integration in non-native listening can start on the basis of word initial phonemes, and finally suggest that during L2 sentence processing listeners do not access the L1 lexicon.
  • Flecken, M., Carroll, M., Weimar, K., & Von Stutterheim, C. (2015). Driving along the road or heading for the village? Conceptual differences underlying motion event encoding in French, German, and French-German L2 users. Modern Language Journal, 99(S1), 100-122. doi:10.1111/j.1540-4781.2015.12181.x.

    Abstract

    The typological contrast between verb- and satellite-framed languages (Talmy, 1985) has set the basis for many empirical studies on L2 acquisition. The current analysis goes beyond this typology by looking in detail at the conceptualization of the path of motion in a motion event. We take as a starting point the cognitive salience of specific elements of motion events that are relevant when conceptualizing space. When expressing direction in French, specific spatial relations involving the entity in motion (its alignment and its distance toward a [potential] endpoint) are relevant, given a variety of path verbs in the lexicon expressing this information (e.g., se diriger vers, avancer 'to direct oneself toward', 'to advance'). This is not the case in German (manner verbs in the lexicon mainly). In German, spatial information is packaged in adjuncts and particles and the path of motion is typically structured via features of the ground (entlanglaufen/fahren 'to walk/drive along') or endpoints ('to walk/drive to/toward'). We investigate those fundamental differences in spatial conceptualization in French and German, as reflected in pre-articulatory patterns of attention allocation (measured with eye tracking) to moving entities and endpoints in motion scenes in an event description task. Our focus is on spatial conceptualization in an L2 (French L2 users of German), analyzing the extent to which these L2 users display target-like patterns or traces of L1 conceptualization transfer. Findings show that, in line with directional concepts expressed in verbs, L1 French speakers allocate more attention to entities in motion and endpoints, before utterance onset, than L1 German speakers do. The L2 German speakers pattern with L1 German speakers in the use of manner verbs, but they have not fully acquired the spatial concepts and means that structure the path of motion in the L2. This is reflected in pre-articulatory attention allocation patterns, according to which the L2 speakers pattern with native speakers of their L1 (French). The findings show a continued deep entrenchment of L1-based processing patterns and spatial frames of reference when speakers prepare for speech in an L2.
  • Flecken, M., Gerwien, J., Carroll, M., & von Stutterheim, C. (2015). Analyzing gaze allocation during language planning: A cross-linguistic study on dynamic events. Language and Cognition, 7(1), 138-166. doi:10.1017/langcog.2014.20.

    Abstract

    Studies on gaze allocation during sentence production have recently begun to implement cross-linguistic analyses in the investigation of visual and linguistic processing. The underlying assumption is that the aspects of a scene that attract attention prior to articulation are, in part, linked to the specific linguistic system and means used for expression. The present study concerns naturalistic, dynamic scenes (video clips) showing causative events (agent acting on an object) and exploits grammatical differences in the domain of verbal aspect, and the way in which the status of an event (a specific vs. habitual instance of an event) is encoded in English and German. Fixations in agent and action areas of interest were time-locked to utterance onset, and we focused on the pre-articulatory time span to shed light on sentence planning processes, involving message generation and scene conceptualization.
  • Flecken, M., Walbert, K., & Dijkstra, T. (2015). ‘Right now, Sophie ∗swims in the pool?!’: Brain potentials of grammatical aspect processing. Frontiers in Psychology, 6: 1764. doi:10.3389/fpsyg.2015.01764.

    Abstract

    We investigated whether brain potentials of grammatical aspect processing resemble semantic or morpho-syntactic processing, or whether they instead are characterized by an entirely distinct pattern in the same individuals. We studied aspect from the perspective of agreement between the temporal information in the context (temporal adverbials, e.g., Right now) and a morpho-syntactic marker of grammatical aspect (e.g., progressive is swimming). Participants read questions providing a temporal context that was progressive (What is Sophie doing in the pool right now?) or habitual (What does Sophie do in the pool every Monday?). Following a lead-in sentence context such as Right now, Sophie…, we measured event-related brain potentials (ERPs) time-locked to verb phrases in four different conditions, e.g., (a) is swimming (control); (b) ∗is cooking (semantic violation); (c) ∗are swimming (morpho-syntactic violation); or (d) ?swims (aspect mismatch); “…in the pool.” The collected ERPs show typical N400 and P600 effects for semantics and morpho-syntax, while aspect processing elicited an Early Negativity (250–350 ms). The aspect-related Negativity was short-lived and had a central scalp distribution with an anterior onset. This differentiates it not only from the semantic N400 effect, but also from the typical LAN (Left Anterior Negativity), that is frequently reported for various types of agreement processing. Moreover, aspect processing did not show a clear P600 modulation. We argue that the specific context for each item in this experiment provided a trigger for agreement checking with temporal information encoded on the verb, i.e., morphological aspect marking. The aspect-related Negativity obtained for aspect agreement mismatches reflects a violated expectation concerning verbal inflection (in the example above, the expected verb phrase was Sophie is X-ing rather than Sophie X-s in condition d). The absence of an additional P600 for aspect processing suggests that the mismatch did not require additional reintegration or processing costs. This is consistent with participants’ post hoc grammaticality judgements of the same sentences, which overall show a high acceptability of aspect mismatch sentences.

    Additional information

    data sheet 1.docx
  • Flecken, M., Athanasopoulos, P., Kuipers, J. R., & Thierry, G. (2015). On the road to somewhere: Brain potentials reflect language effects on motion event perception. Cognition, 141, 41-51. doi:10.1016/j.cognition.2015.04.006.

    Abstract

    Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect in which attention is drawn to trajectory and endpoint of motion events equally. German, in contrast, is a non-aspect language which highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): In 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint were matching, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as those used in Experiment 1. German and English participants did not differ in response times showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment. We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar.
  • Flecken, M., & Schmiedtova, B. (2007). The expression of simultaneity in L1 Dutch. Toegepaste Taalwetenschap in Artikelen, 77(1), 67-78.
  • Floyd, S. (2007). Changing times and local terms on the Rio Negro, Brazil: Amazonian ways of depolarizing epistemology, chronology and cultural Change. Latin American and Caribbean Ethnic studies, 2(2), 111-140. doi:10.1080/17442220701489548.

    Abstract

    Partway along the vast waterways of Brazil's middle Rio Negro, upstream from urban Manaus and downstream from the ethnographically famous Northwest Amazon region, is the town of Castanheiro, whose inhabitants skillfully negotiate a space between the polar extremes of 'traditional' and 'acculturated.' This paper takes an ethnographic look at the non-polarizing terms that these rural Amazonian people use for talking about cultural change. While popular and academic discourses alike have often framed cultural change in the Amazon as a linear process, Amazonian discourse provides resources for describing change as situated in shifting fields of knowledge of the social and physical environments, better capturing its non-linear complexity and ambiguity.
  • Floyd, S. (2015). Other-initiated repair in Cha’palaa. Open linguistics, 1(1), 467-489. doi:10.1515/opli-2015-0014.

    Abstract

    This article describes the interactional patterns and linguistic structures associated with other-initiated repair, as observed in a corpus of video-recorded conversation in Cha’palaa (a Barbacoan language spoken in north-western Ecuador). Special attention is given to the relation of repair formats to the morphosyntactic and intonational systems of the language. It examines the distinctive falling intonation observed with interjections and content question formats and the pattern of a held mid-high tone observed in polarity questions, as well as the function of Cha’palaa grammatical features such as the case marking system, the nominal classifiers and the verb classification system as formats for repair initiation. It considers a selection of examples from a video corpus to illustrate a broad range of sequence types of open and restricted other-initiated repair, noting that Cha’palaa had the highest relative rate of open repair in the cross-linguistic sample. It also considers the extension of OIR to other practices such as news uptake and disagreement in the Cha’palaa corpus.
  • Floyd, S., San Roque, L., & Majid, A. (2018). Smell is coded in grammar and frequent in discourse: Cha'palaa olfactory language in cross-linguistic perspective. Journal of Linguistic Anthropology, 28(2), 175-196. doi:10.1111/jola.12190.

    Abstract

    It has long been claimed that there is no lexical field of smell, and that smell is of too little validity to be expressed in grammar. We demonstrate both claims are false. The Cha'palaa language (Ecuador) has at least 15 abstract smell terms, each of which is formed using a type of classifier previously thought not to exist. Moreover, using conversational corpora we show that Cha'palaa speakers also talk about smell more than Imbabura Quechua and English speakers. Together, this shows how language and social interaction may jointly reflect distinct cultural orientations towards sensory experience in general and olfaction in particular.
  • Floyd, S. (2015). Transparência semântica e o ‘calque’ cultural no noroeste amazônico [Portuguese transl. of Semantic transparency and cultural calquing in the Northwest Amazon, 2013]. Wamon: Revista dos alunos do PpGas/UFAM, 1(1), 95-117. Retrieved from http://www.periodicos.ufam.edu.br/index.php/wamon/article/view/946.

    Abstract

    The ethnographic literature has described the northwest Amazon as an area of shared culture across linguistic groups. This paper illustrates how a principle of semantic transparency across languages is a key means of establishing elements of a common regional culture through practices like the calquing of ethnonyms and toponyms so that they are semantically, but not phonologically, equivalent across languages. It places the northwest Amazon in a general discussion of cross-linguistic naming practices in South America and considers the extent to which a preference for semantic transparency can be linked to cases of widespread cultural “calquing”. It also addresses the principle of semantic transparency beyond specific referential phrases and into larger discourse structures. It concludes that an attention to semiotic practices in multilingual settings can provide new and more complex ways of thinking about the idea of shared culture.
  • Floyd, S., Rossi, G., Baranova, J., Blythe, J., Dingemanse, M., Kendrick, K. H., Zinken, J., & Enfield, N. J. (2018). Universals and cultural diversity in the expression of gratitude. Royal Society Open Science, 5: 180391. doi:10.1098/rsos.180391.

    Abstract

    Gratitude is argued to have evolved to motivate and maintain social reciprocity among people, and to be linked to a wide range of positive effects — social, psychological, and even physical. But is socially reciprocal behaviour dependent on the expression of gratitude, for example by saying "thank you" as in English? Current research has not included cross-cultural elements, and has tended to conflate gratitude as an emotion with gratitude as a linguistic practice, as might appear to be the case in English. Here we ask to what extent people actually express gratitude in different societies by focussing on episodes of everyday life where someone obtains a good, service, or support from another, and comparing these episodes across eight languages from five continents. What we find is that expressions of gratitude in these episodes are remarkably rare, suggesting that social reciprocity in everyday life relies on tacit understandings of people’s rights and duties surrounding mutual assistance and collaboration. At the same time, we also find minor cross-cultural variation, with slightly higher rates in Western European languages English and Italian, showing that universal tendencies of social reciprocity should not be conflated with more culturally variable practices of expressing gratitude. Our study complements previous experimental and culture-specific research on social reciprocity with a systematic comparison of audiovisual corpora of naturally occurring social interaction from different cultures from around the world.
  • Forkel, S. J. (2015). Heinrich Sachs (1863–1928). Journal of Neurology, 262, 498-500. doi:10.1007/s00415-014-7517-2.

    Abstract

    The nineteenth century witnessed some of the greatest neuroanatomists of all time. Amongst them is the largely forgotten Heinrich Sachs, a student of Carl Wernicke in Breslau.
  • Forkel, S. J., & Catani, M. (2018). Lesion mapping in acute stroke aphasia and its implications for recovery. Neuropsychologia, 115, 88-100. doi:10.1016/j.neuropsychologia.2018.03.036.

    Abstract

    Patients with stroke offer a unique window into understanding human brain function. Mapping stroke lesions poses several challenges due to the complexity of the lesion anatomy and the mechanisms causing local and remote disruption on brain networks. In this prospective longitudinal study, we compare standard and advanced approaches to white matter lesion mapping applied to acute stroke patients with aphasia. Eighteen patients with acute left hemisphere stroke were recruited and scanned within two weeks from symptom onset. Aphasia assessment was performed at baseline and six-month follow-up. Structural and diffusion MRI contrasts indicated an area of maximum overlap in the anterior external/extreme capsule with diffusion images showing a larger overlap extending into posterior perisylvian regions. Anatomical predictors of recovery included damage to ipsilesional tracts (as shown by both structural and diffusion images) and contralesional tracts (as shown by diffusion images only). These findings indicate converging results from structural and diffusion lesion mapping methods but also clear differences between the two approaches in their ability to identify predictors of recovery outside the lesioned regions.
  • Forkel, S. J., Mahmood, S., Vergani, F., & Catani, M. (2015). The white matter of the human cerebrum: Part I The occipital lobe by Heinrich Sachs. Cortex, 62, 182-202. doi:10.1016/j.cortex.2014.10.023.

    Abstract

    This is the first complete translation of Heinrich Sachs' outstanding white matter atlas dedicated to the occipital lobe. This work is accompanied by a prologue by Prof Carl Wernicke who for many years was Sachs' mentor in Breslau and enthusiastically supported his work.
  • Frances, C., Costa, A., & Baus, C. (2018). On the effects of regional accents on memory and credibility. Acta Psychologica, 186, 63-70. doi:10.1016/j.actpsy.2018.04.003.

    Abstract

    The information we obtain from how speakers sound—for example their accent—affects how we interpret the messages they convey. A clear example is foreign accented speech, where reduced intelligibility and speaker's social categorization (out-group member) affect memory and the credibility of the message (e.g., less trustworthiness). In the present study, we go one step further and ask whether evaluations of messages are also affected by regional accents—accents from a different region than the listener. In the current study, we report results from three experiments on immediate memory recognition and immediate credibility assessments as well as the illusory truth effect. These revealed no differences between messages conveyed in local—from the same region as the participant—and regional accents—from native speakers of a different country than the participants. Our results suggest that when the accent of a speaker has high intelligibility, social categorization by accent does not seem to negatively affect how we treat the speakers' messages.
  • Francisco, A. A., Takashima, A., McQueen, J. M., Van den Bunt, M., Jesse, A., & Groen, M. A. (2018). Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia, 117, 454-471. doi:10.1016/j.neuropsychologia.2018.07.009.

    Abstract

    The aim of the present fMRI study was to investigate whether typical and dyslexic adult readers differed in the neural correlates of audiovisual speech processing. We tested for Blood Oxygen-Level Dependent (BOLD) activity differences between these two groups in a 1-back task, as they processed written (word, illegal consonant strings) and spoken (auditory, visual and audiovisual) stimuli. When processing written stimuli, dyslexic readers showed reduced activity in the supramarginal gyrus, a region suggested to play an important role in phonological processing, but only when they processed strings of consonants, not when they read words. During the speech perception tasks, dyslexic readers were only slower than typical readers in their behavioral responses in the visual speech condition. Additionally, dyslexic readers presented reduced neural activation in the auditory, the visual, and the audiovisual speech conditions. The groups also differed in terms of superadditivity, with dyslexic readers showing decreased neural activation in the regions of interest. An additional analysis focusing on vision-related processing during the audiovisual condition showed diminished activation for the dyslexic readers in a fusiform gyrus cluster. Our results thus suggest that there are differences in audiovisual speech processing between dyslexic and normal readers. These differences might be explained by difficulties in processing the unisensory components of audiovisual speech, more specifically, dyslexic readers may benefit less from visual information during audiovisual speech processing than typical readers. Given that visual speech processing supports the development of phonological skills fundamental in reading, differences in processing of visual speech could contribute to differences in reading ability between typical and dyslexic readers.
  • Francken, J. C., Meijs, E. L., Ridderinkhof, O. M., Hagoort, P., de Lange, F. P., & van Gaal, S. (2015). Manipulating word awareness dissociates feed-forward from feedback models of language-perception interactions. Neuroscience of consciousness, 1. doi:10.1093/nc/niv003.

    Abstract

    Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of processing these interactions occur. Here we aim to dissociate between two competing models of language–perception interactions: a feed-forward and a feedback model. We capitalized on the fact that the models make different predictions on the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. “rise,” “fall”), directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves intact feed-forward information processing from low- to high-level regions, whereas it abolishes subsequent feedback. Under this condition, participants remained faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. This suggests that language–perception interactions are driven by the feed-forward convergence of linguistic and perceptual information at higher-level conceptual and decision stages.
  • Francken, J. C., Meijs, E. L., Hagoort, P., van Gaal, S., & de Lange, F. P. (2015). Exploring the automaticity of language-perception interactions: Effects of attention and awareness. Scientific Reports, 5: 17725. doi:10.1038/srep17725.

    Abstract

    Previous studies have shown that language can modulate visual perception, by biasing and/or enhancing perceptual performance. However, it is still debated where in the brain visual and linguistic information are integrated, and whether the effects of language on perception are automatic and persist even in the absence of awareness of the linguistic material. Here, we aimed to explore the automaticity of language-perception interactions and the neural loci of these interactions in an fMRI study. Participants engaged in a visual motion discrimination task (upward or downward moving dots). Before each trial, a word prime was briefly presented that implied upward or downward motion (e.g., “rise”, “fall”). These word primes strongly influenced behavior: congruent motion words sped up reaction times and improved performance relative to incongruent motion words. Neural congruency effects were only observed in the left middle temporal gyrus, showing higher activity for congruent compared to incongruent conditions. This suggests that higher-level conceptual areas rather than sensory areas are the locus of language-perception interactions. When motion words were rendered unaware by means of masking, they still affected visual motion perception, suggesting that language-perception interactions may rely on automatic feed-forward integration of perceptual and semantic material in language areas of the brain.
  • Francken, J. C., Kok, P., Hagoort, P., & De Lange, F. P. (2015). The behavioral and neural effects of language on motion perception. Journal of Cognitive Neuroscience, 27(1), 175-184. doi:10.1162/jocn_a_00682.

    Abstract

    Perception does not function as an isolated module but is tightly linked with other cognitive functions. Several studies have demonstrated an influence of language on motion perception, but it remains debated at which level of processing this modulation takes place. Some studies argue for an interaction in perceptual areas, but it is also possible that the interaction is mediated by "language areas" that integrate linguistic and visual information. Here, we investigated whether language-perception interactions were specific to the language-dominant left hemisphere by comparing the effects of language on visual material presented in the right (RVF) and left visual fields (LVF). Furthermore, we determined the neural locus of the interaction using fMRI. Participants performed a visual motion detection task. On each trial, the visual motion stimulus was presented in either the LVF or in the RVF, preceded by a centrally presented word (e.g., "rise"). The word could be congruent, incongruent, or neutral with regard to the direction of the visual motion stimulus that was presented subsequently. Participants were faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. Interestingly, the speed benefit was present only for motion stimuli that were presented in the RVF. We observed a neural counterpart of the behavioral facilitation effects in the left middle temporal gyrus, an area involved in semantic processing of verbal material. Together, our results suggest that semantic information about motion retrieved in language regions may automatically modulate perceptual decisions about motion.
  • Francks, C. (2015). Exploring human brain lateralization with molecular genetics and genomics. Annals of the New York Academy of Sciences, 1359, 1-13. doi:10.1111/nyas.12770.

    Abstract

    Lateralizations of brain structure and motor behavior have been observed in humans as early as the first trimester of gestation, and are likely to arise from asymmetrical genetic–developmental programs, as in other animals. Studies of gene expression levels in postmortem tissue samples, comparing the left and right sides of the human cerebral cortex, have generally not revealed striking transcriptional differences between the hemispheres. This is likely due to lateralization of gene expression being subtle and quantitative. However, a recent re-analysis and meta-analysis of gene expression data from the adult superior temporal and auditory cortex found lateralization of transcription of genes involved in synaptic transmission and neuronal electrophysiology. Meanwhile, human subcortical mid- and hindbrain structures have not been well studied in relation to lateralization of gene activity, despite being potentially important developmental origins of asymmetry. Genetic polymorphisms with small effects on adult brain and behavioral asymmetries are beginning to be identified through studies of large datasets, but the core genetic mechanisms of lateralized human brain development remain unknown. Identifying subtly lateralized genetic networks in the brain will lead to a new understanding of how neuronal circuits on the left and right are differently fine-tuned to preferentially support particular cognitive and behavioral functions.
  • Francks, C., Maegawa, S., Laurén, J., Abrahams, B. S., Velayos-Baeza, A., Medland, S. E., Colella, S., Groszer, M., McAuley, E. Z., Caffrey, T. M., Timmusk, T., Pruunsild, P., Koppel, I., Lind, P. A., Matsumoto-Itaba, N., Nicod, J., Xiong, L., Joober, R., Enard, W., Krinsky, B., Nanba, E., Richardson, A. J., Riley, B. P., Martin, N. G., Strittmatter, S. M., Möller, H.-J., Rujescu, D., St Clair, D., Muglia, P., Roos, J. L., Fisher, S. E., Wade-Martins, R., Rouleau, G. A., Stein, J. F., Karayiorgou, M., Geschwind, D. H., Ragoussis, J., Kendler, K. S., Airaksinen, M. S., Oshimura, M., DeLisi, L. E., & Monaco, A. P. (2007). LRRTM1 on chromosome 2p12 is a maternally suppressed gene that is associated paternally with handedness and schizophrenia. Molecular Psychiatry, 12, 1129-1139. doi:10.1038/sj.mp.4002053.

    Abstract

    Left-right asymmetrical brain function underlies much of human cognition, behavior and emotion. Abnormalities of cerebral asymmetry are associated with schizophrenia and other neuropsychiatric disorders. The molecular, developmental and evolutionary origins of human brain asymmetry are unknown. We found significant association of a haplotype upstream of the gene LRRTM1 (Leucine-rich repeat transmembrane neuronal 1) with a quantitative measure of human handedness in a set of dyslexic siblings, when the haplotype was inherited paternally (P=0.00002). While we were unable to find this effect in an epidemiological set of twin-based sibships, we did find that the same haplotype is overtransmitted paternally to individuals with schizophrenia/schizoaffective disorder in a study of 1002 affected families (P=0.0014). We then found direct confirmatory evidence that LRRTM1 is an imprinted gene in humans that shows a variable pattern of maternal downregulation. We also showed that LRRTM1 is expressed during the development of specific forebrain structures, and thus could influence neuronal differentiation and connectivity. This is the first potential genetic influence on human handedness to be identified, and the first putative genetic effect on variability in human brain asymmetry. LRRTM1 is a candidate gene for involvement in several common neurodevelopmental disorders, and may have played a role in human cognitive and behavioral evolution.
  • Francks, C. (2009). Understanding the genetics of behavioural and psychiatric traits will only be achieved through a realistic assessment of their complexity. Laterality: Asymmetries of Body, Brain and Cognition, 14(1), 11-16. doi:10.1080/13576500802536439.

    Abstract

    Francks et al. (2007) performed a recent study in which the first putative genetic effect on human handedness was identified (the imprinted locus LRRTM1 on human chromosome 2). In this issue of Laterality, Tim Crow and colleagues present a critique of that study. The present paper presents a personal response to that critique which argues that Francks et al. (2007) published a substantial body of evidence implicating LRRTM1 in handedness and schizophrenia. Progress will now be achieved by others trying to validate, refute, or extend those findings, rather than by further armchair discussion.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2007). Coherence-driven resolution of referential ambiguity: A computational model. Memory & Cognition, 35(6), 1307-1322.

    Abstract

    We present a computational model that provides a unified account of inference, coherence, and disambiguation. It simulates how the build-up of coherence in text leads to the knowledge-based resolution of referential ambiguity. Possible interpretations of an ambiguity are represented by centers of gravity in a high-dimensional space. The unresolved ambiguity forms a vector in the same space. This vector is attracted by the centers of gravity, while also being affected by context information and world knowledge. When the vector reaches one of the centers of gravity, the ambiguity is resolved to the corresponding interpretation. The model accounts for reading time and error rate data from experiments on ambiguous pronoun resolution and explains the effects of context informativeness, anaphor type, and processing depth. It shows how implicit causality can have an early effect during reading. A novel prediction is that ambiguities can remain unresolved if there is insufficient disambiguating information.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
  • Franken, M. K., Hagoort, P., & Acheson, D. J. (2015). Modulations of the auditory M100 in an Imitation Task. Brain and Language, 142, 18-23. doi:10.1016/j.bandl.2015.01.001.

    Abstract

    Models of speech production explain event-related suppression of the auditory cortical response as reflecting a comparison between auditory predictions and feedback. The present MEG study was designed to test two predictions from this framework: 1) whether the reduced auditory response varies as a function of the mismatch between prediction and feedback; 2) whether individual variation in this response is predictive of speech-motor adaptation. Participants alternated between online imitation and listening tasks. In the imitation task, participants began each trial producing the same vowel (/e/) and subsequently listened to and imitated auditorily presented vowels varying in acoustic distance from /e/. Results replicated suppression, with a smaller M100 during speaking than listening. Although we did not find unequivocal support for the first prediction, participants with less M100 suppression were better at the imitation task. These results are consistent with the enhancement of M100 serving as an error signal to drive subsequent speech-motor adaptation.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Frazier, T., Embacher, R., Tilot, A. K., Koenig, K., Mester, J., & Eng, C. (2015). Molecular and phenotypic abnormalities in individuals with germline heterozygous PTEN mutations and autism. Molecular Psychiatry, 20, 1132-1138. doi:10.1038/mp.2014.125.

    Abstract

    PTEN is a tumor suppressor associated with an inherited cancer syndrome and an important regulator of ongoing neural connectivity and plasticity. The present study examined molecular and phenotypic characteristics of individuals with germline heterozygous PTEN mutations and autism spectrum disorder (ASD) (PTEN-ASD), with the aim of identifying pathophysiologic markers that specifically associate with PTEN-ASD and that may serve as targets for future treatment trials. PTEN-ASD patients (n=17) were compared with idiopathic (non-PTEN) ASD patients with (macro-ASD, n=16) and without macrocephaly (normo-ASD, n=38) and healthy controls (n=14). Group differences were evaluated for PTEN pathway protein expression levels, global and regional structural brain volumes and cortical thickness measures, neurocognition and adaptive behavior. RNA expression patterns and brain characteristics of a murine model of Pten mislocalization were used to further evaluate abnormalities observed in human PTEN-ASD patients. PTEN-ASD had a high proportion of missense mutations and showed reduced PTEN protein levels. Compared with the other groups, prominent white-matter and cognitive abnormalities were specifically associated with PTEN-ASD patients, with strong reductions in processing speed and working memory. White-matter abnormalities mediated the relationship between PTEN protein reductions and reduced cognitive ability. The Ptenm3m4 murine model had differential expression of genes related to myelination and increased corpus callosum. Processing speed and working memory deficits and white-matter abnormalities may serve as useful features that signal clinicians that PTEN is etiologic and prompting referral to genetic professionals for gene testing, genetic counseling and cancer risk management; and could reveal treatment targets in trials of treatments for PTEN-ASD.
  • French, C. A., Groszer, M., Preece, C., Coupe, A.-M., Rajewsky, K., & Fisher, S. E. (2007). Generation of mice with a conditional Foxp2 null allele. Genesis, 45(7), 440-446. doi:10.1002/dvg.20305.

    Abstract

    Disruptions of the human FOXP2 gene cause problems with articulation of complex speech sounds, accompanied by impairment in many aspects of language ability. The FOXP2/Foxp2 transcription factor is highly similar in humans and mice, and shows a complex conserved expression pattern, with high levels in neuronal subpopulations of the cortex, striatum, thalamus, and cerebellum. In the present study we generated mice in which loxP sites flank exons 12-14 of Foxp2; these exons encode the DNA-binding motif, a key functional domain. We demonstrate that early global Cre-mediated recombination yields a null allele, as shown by loss of the loxP-flanked exons at the RNA level and an absence of Foxp2 protein. Homozygous null mice display severe motor impairment, cerebellar abnormalities and early postnatal lethality, consistent with other Foxp2 mutants. When crossed to transgenic lines expressing Cre protein in a spatially and/or temporally controlled manner, these conditional mice will provide new insights into the contributions of Foxp2 to distinct neural circuits, and allow dissection of roles during development and in the mature brain.
  • Friedlaender, J., Hunley, K., Dunn, M., Terrill, A., Lindström, E., Reesink, G., & Friedlaender, F. (2009). Linguistics more robust than genetics [Letter to the editor]. Science, 324, 464-465. doi:10.1126/science.324_464c.
  • Furman, R., & Ozyurek, A. (2007). Development of interactional discourse markers: Insights from Turkish children's and adults' narratives. Journal of Pragmatics, 39(10), 1742-1757. doi:10.1016/j.pragma.2007.01.008.

    Abstract

    Discourse markers (DMs) are linguistic elements that index different relations and coherence between units of talk (Schiffrin, Deborah, 1987. Discourse Markers. Cambridge University Press, Cambridge). Most research on the development of these forms has focused on conversations rather than narratives, and furthermore has not directly compared children's use of DMs to adult usage. This study examines the development of three DMs (şey ‘uuhh’, yani ‘I mean’, işte ‘y’know’) that mark interactional levels of discourse in oral Turkish narratives in 60 Turkish children (3-, 5- and 9-year-olds) and 20 Turkish-speaking adults. The results show that the frequency and functions of DMs change with age. Children learn şey, which mainly marks exchange-level structures, earliest. However, yani and işte are multi-functional, marking both information states and participation frameworks, and are consequently learned later. Children also use DMs with different functions than adults do. Overall, the results show that learning to use interactional DMs in narratives is complex and continues beyond age 9, especially for multi-functional DMs that index an interplay of discourse coherence at different levels.
  • Fusaroli, R., Perlman, M., Mislove, A., Paxton, A., Matlock, T., & Dale, R. (2015). Timescales of massive human entrainment. PLoS One, 10: e0122742. doi:10.1371/journal.pone.0122742.

    Abstract

    The past two decades have seen an upsurge of interest in the collective behaviors of complex systems composed of many agents entrained to each other and to external events. In this paper, we extend the concept of entrainment to the dynamics of human collective attention. We conducted a detailed investigation of the unfolding of human entrainment—as expressed by the content and patterns of hundreds of thousands of messages on Twitter—during the 2012 US presidential debates. By time-locking these data sources, we quantify the impact of the unfolding debate on human attention at three time scales. We show that collective social behavior covaries second-by-second with the interactional dynamics of the debates: A candidate speaking induces rapid increases in mentions of his name on social media and decreases in mentions of the other candidate. Moreover, interruptions by an interlocutor increase the attention received. We also highlight a distinct time scale for the impact of salient content during the debates: Across well-known remarks in each debate, mentions in social media start within 5–10 seconds after a remark occurs, peak at approximately one minute, and slowly decay in a consistent fashion across well-known events during the debates. Finally, we show that public attention after an initial burst slowly decays through the course of the debates. Thus we demonstrate that large-scale human entrainment may hold across a number of distinct scales, in an exquisitely time-locked fashion. The methods and results pave the way for careful study of the dynamics and mechanisms of large-scale human entrainment.
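
    As a schematic illustration of what time-locking a per-second mention count to debate events involves, the sketch below averages a synthetic count series in a fixed window around assumed event onsets. It is not the authors' code or data; the counts, event times, and window are invented for illustration.

```python
# A minimal sketch (not the authors' pipeline) of event-locked averaging of a
# per-second count series, in the spirit of the debate/Twitter analysis.
import numpy as np

def event_locked_average(counts, event_times, window=(-10, 60)):
    """Average a per-second count series in a window around each event time."""
    lo, hi = window
    segments = []
    for t in event_times:
        if t + lo >= 0 and t + hi <= len(counts):
            segments.append(counts[t + lo : t + hi])
    return np.mean(segments, axis=0)

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=5400).astype(float)   # fake per-second mention counts (90 min)
events = [600, 1800, 3000, 4200]                    # fake onsets of salient remarks (seconds)
for t in events:
    counts[t : t + 120] += np.linspace(8, 0, 120)   # inject a decaying burst after each event

profile = event_locked_average(counts, events)
print("peak lag (s):", int(np.argmax(profile)) - 10)  # index 0 corresponds to lag -10 s
```
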
  • Galizia, E. C., Myers, C. T., Leu, C., De Kovel, C. G. F., Afrikanova, T., Cordero-Maldonado, M. L., Martins, T. G., Jacmin, M., Drury, S., Chinthapalli, V. K., Muhle, H., Pendziwiat, M., Sander, T., Ruppert, A. K., Moller, R. S., Thiele, H., Krause, R., Schubert, J., Lehesjoki, A. E., Nurnberg, P., Lerche, H., Palotie, A., Coppola, A., Striano, S., Del Gaudio, L., Boustred, C., Schneider, A. L., Lench, N., Jocic-Jakubi, B., Covanis, A., Capovilla, G., Veggiotti, P., Piccioli, M., Parisi, P., Cantonetti, L., Sadleir, L. G., Mullen, S. A., Berkovic, S. F., Stephani, U., Helbig, I., Crawford, A. D., Esguerra, C. V., Trenite, D., Koeleman, B. P. C., Mefford, H. C., Scheffer, I. E., Sisodiya, S. M., & EURO Epinomics CoGIE Consortium (2015). CHD2 variants are a risk factor for photosensitivity in epilepsy. Brain, 138(5), 1198-1207. doi:10.1093/brain/awv052.

    Abstract

    Photosensitivity is a heritable abnormal cortical response to flickering light, manifesting as particular electroencephalographic changes, with or without seizures. Photosensitivity is prominent in a very rare epileptic encephalopathy due to de novo CHD2 mutations, but is also seen in epileptic encephalopathies due to other gene mutations. We determined whether CHD2 variation underlies photosensitivity in common epilepsies, specific photosensitive epilepsies and individuals with photosensitivity without seizures. We studied 580 individuals with epilepsy and either photosensitive seizures or abnormal photoparoxysmal response on electroencephalography, or both, and 55 individuals with photoparoxysmal response but no seizures. We compared CHD2 sequence data to publicly available data from 34 427 individuals, not enriched for epilepsy. We investigated the role of unique variants seen only once in the entire data set. We sought CHD2 variants in 238 exomes from familial genetic generalized epilepsies, and in other public exome data sets. We identified 11 unique variants in the 580 individuals with photosensitive epilepsies and 128 unique variants in the 34 427 controls: unique CHD2 variation is over-represented in cases overall (P = 2·17 × 10−5). Among epilepsy syndromes, there was over-representation of unique CHD2 variants (3/36 cases) in the archetypal photosensitive epilepsy syndrome, eyelid myoclonia with absences (P = 3·50 × 10−4). CHD2 variation was not over-represented in photoparoxysmal response without seizures. Zebrafish larvae with chd2 knockdown were tested for photosensitivity. Chd2 knockdown markedly enhanced mild innate zebrafish larval photosensitivity. CHD2 mutation is the first identified cause of the archetypal generalized photosensitive epilepsy syndrome, eyelid myoclonia with absences. Unique CHD2 variants are also associated with photosensitivity in common epilepsies. CHD2 does not encode an ion channel, opening new avenues for research into human cortical excitability.
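
    The over-representation comparison contrasts the rate of unique variants in cases with that in controls. The sketch below assumes a simple two-sided Fisher's exact test on a 2x2 count table; this is an illustrative assumption, not necessarily the burden analysis used in the paper, and it is not expected to reproduce the reported P-values exactly.

```python
# A rough sketch of a case-control over-representation test using the counts
# quoted in the abstract. Assumption for illustration only: each unique variant
# is treated as an independent carrier in a 2x2 Fisher's exact test.
from scipy.stats import fisher_exact

cases_with, cases_total = 11, 580           # unique CHD2 variants in photosensitive cases
controls_with, controls_total = 128, 34427  # unique CHD2 variants in reference individuals

table = [[cases_with, cases_total - cases_with],
         [controls_with, controls_total - controls_with]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.2e}")
```
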
  • Galucio, A. V., Meira, S., Birchall, J., Moore, D., Gabas Junior, N., Drude, S., Storto, L., Picanço, G., & Rodrigues, C. R. (2015). Genealogical relations and lexical distances within the Tupian linguistic family. Boletim do Museu Paraense Emilio Goeldi:Ciencias Humanas, 10, 229-274. doi:10.1590/1981-81222015000200004.

    Abstract

    In this paper we present the first results of the application of computational methods, inspired by the ideas in McMahon & McMahon (2005), to a dataset collected from languages of every branch of the Tupian family (including all living non-Tupí-Guaraní languages) in order to produce a classification of the family based on lexical distance. We used both a Swadesh list (with historically more stable terms) and a list of animal and plant names for comparison of the results. In addition, we also selected more (HiHi) and less (LoLo) stable terms from the Swadesh list to form sublists for independent treatment. We compared the resulting NeighborNet networks and neighbor-joining cladograms and drew conclusions about their significance for the current understanding of the classification of Tupian languages. One important result is the lack of support for the currently discussed idea of an Eastern-Western division within Tupí.
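
    The core quantity in a lexical-distance classification of this kind is a pairwise distance between languages, typically the proportion of shared meaning slots for which two languages do not share a cognate class. The sketch below is a minimal illustration under that assumption, not the authors' pipeline; the language names and cognate codes are invented toy data, and a matrix computed this way could then be fed to NeighborNet or neighbor-joining software such as SplitsTree.

```python
# Toy sketch of pairwise lexical distances from cognate-coded word lists.
# Distance = proportion of shared meanings with different cognate classes.

def lexical_distance(a, b):
    """Proportion of jointly attested meanings assigned to different cognate classes."""
    shared = [m for m in a if m in b and a[m] is not None and b[m] is not None]
    if not shared:
        return 1.0
    diff = sum(1 for m in shared if a[m] != b[m])
    return diff / len(shared)

# invented cognate-class codes per meaning slot for three hypothetical languages
data = {
    "LangA": {"water": 1, "stone": 1, "fish": 2, "sun": 1},
    "LangB": {"water": 1, "stone": 2, "fish": 2, "sun": 1},
    "LangC": {"water": 3, "stone": 2, "fish": 1, "sun": 2},
}

langs = sorted(data)
for x in langs:
    row = [f"{lexical_distance(data[x], data[y]):.2f}" for y in langs]
    print(x, " ".join(row))
```
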
  • Ganushchak, L. Y., & Schiller, N. O. (2009). Speaking in one’s second language under time pressure: An ERP study on verbal self-monitoring in German-Dutch bilinguals. Psychophysiology, 46, 410-419. doi:10.1111/j.1469-8986.2008.00774.x.

    Abstract

    This study addresses how verbal self-monitoring and the Error-Related Negativity (ERN) are affected by time pressure when a task is performed in a second language as opposed to performance in the native language. German–Dutch bilinguals were required to perform a phoneme-monitoring task in Dutch with and without a time pressure manipulation. We obtained an ERN following verbal errors that showed an atypical increase in amplitude under time pressure. This finding is taken to suggest that under time pressure participants had more interference from their native language, which in turn led to a greater response conflict and thus enhancement of the amplitude of the ERN. This result demonstrates once more that the ERN is sensitive to psycholinguistic manipulations and suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Gao, X., & Jiang, T. (2018). Sensory constraints on perceptual simulation during sentence reading. Journal of Experimental Psychology: Human Perception and Performance, 44(6), 848-855. doi:10.1037/xhp0000475.

    Abstract

    Resource-constrained models of language processing predict that perceptual simulation during language understanding would be compromised by sensory limitations (such as reading text in unfamiliar/difficult font), whereas strong versions of embodied theories of language would predict that simulating perceptual symbols in language would not be impaired even under sensory-constrained situations. In 2 experiments, sensory decoding difficulty was manipulated by using easy and hard fonts to study perceptual simulation during sentence reading (Zwaan, Stanfield, & Yaxley, 2002). Results indicated that simulating perceptual symbols in language was not compromised by surface-form decoding challenges such as difficult font, suggesting relative resilience of embodied language processing in the face of certain sensory constraints. Further implications for learning from text and individual differences in language processing will be discussed.
  • Garcia, R., Dery, J. E., Roeser, J., & Höhle, B. (2018). Word order preferences of Tagalog-speaking adults and children. First Language, 38(6), 617-640. doi:10.1177/0142723718790317.

    Abstract

    This article investigates the word order preferences of Tagalog-speaking adults and five- and seven-year-old children. The participants were asked to complete sentences to describe pictures depicting actions between two animate entities. Adults preferred agent-initial constructions in the patient voice but not in the agent voice, while the children produced mainly agent-initial constructions regardless of voice. This agent-initial preference, despite the lack of a close link between the agent and the subject in Tagalog, shows that this word order preference is not merely syntactically-driven (subject-initial preference). Additionally, the children’s agent-initial preference in the agent voice, contrary to the adults’ lack of preference, shows that children do not respect the subject-last principle of ordering Tagalog full noun phrases. These results suggest that language-specific optional features like a subject-last principle take longer to be acquired.
  • Garrido, L., Eisner, F., McGettigan, C., Stewart, L., Sauter, D., Hanley, J. R., Schweinberger, S. R., Warren, J. D., & Duchaine, B. (2009). Developmental phonagnosia: A selective deficit of vocal identity recognition. Neuropsychologia, 47(1), 123-131. doi:10.1016/j.neuropsychologia.2008.08.003.

    Abstract

    Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker’s vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.
  • Gascoyne, D. M., Spearman, H., Lyne, L., Puliyadi, R., Perez-Alcantara, M., Coulton, L., Fisher, S. E., Croucher, P. I., & Banham, A. H. (2015). The forkhead transcription factor FOXP2 is required for regulation of p21WAF1/CIP1 in 143B osteosarcoma cell growth arrest. PLoS One, 10(6): e0128513. doi:10.1371/journal.pone.0128513.

    Abstract

    Mutations of the forkhead transcription factor FOXP2 gene have been implicated in inherited speech-and-language disorders, and specific Foxp2 expression patterns in neuronal populations and neuronal phenotypes arising from Foxp2 disruption have been described. However, molecular functions of FOXP2 are not completely understood. Here we report a requirement for FOXP2 in growth arrest of the osteosarcoma cell line 143B. We observed endogenous expression of this transcription factor both transiently in normally developing murine osteoblasts and constitutively in human SAOS-2 osteosarcoma cells blocked in early osteoblast development. Critically, we demonstrate that in 143B osteosarcoma cells with minimal endogenous expression, FOXP2 induced by growth arrest is required for up-regulation of p21WAF1/CIP1. Upon growth factor withdrawal, FOXP2 induction occurs rapidly and precedes p21WAF1/CIP1 activation. Additionally, FOXP2 expression could be induced by MAPK pathway inhibition in growth-arrested 143B cells, but not in traditional cell line models of osteoblast differentiation (MG-63, C2C12, MC3T3-E1). Our data are consistent with a model in which transient upregulation of Foxp2 in pre-osteoblast mesenchymal cells regulates a p21-dependent growth arrest checkpoint, which may have implications for normal mesenchymal and osteosarcoma biology.
  • Gazendam, L., Wartena, C., Malaise, V., Schreiber, G., De Jong, A., & Brugman, H. (2009). Automatic annotation suggestions for audiovisual archives: Evaluation aspects. Interdisciplinary Science Reviews, 34(2/3), 172-188. doi:10.1179/174327909X441090.

    Abstract

    In the context of large and ever growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time consuming and expensive task of manual annotation and it could help cataloguers attain a higher inter-annotator agreement. However, some questions arise in practice: what is the quality of the automatically produced annotations? How do they compare with manual annotations and with the requirements for annotation that were defined in the archive? If different from the manual annotations, are the automatic annotations wrong? In the CHOICE project, partially hosted at the Netherlands Institute for Sound and Vision, the Dutch public archive for audiovisual broadcasts, we automatically generate annotation suggestions for cataloguers. In this paper, we define three types of evaluation of these annotation suggestions: (1) a classic and strict evaluation measure expressing the overlap between automatically generated keywords and the manual annotations, (2) a loosened evaluation measure for which semantically very similar annotations are also considered as relevant matches, and (3) an in-use evaluation of the usefulness of manual versus automatic annotations in the context of serendipitous browsing. During serendipitous browsing, the annotations (manual or automatic) are used to retrieve and visualize semantically related documents.
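
    The sketch below illustrates, under assumptions, the difference between the strict and loosened evaluation measures described above: strict credit requires exact matches between suggested and manual keywords, while loosened credit also accepts suggestions that are semantically close to a manual keyword. The relatedness table, keyword sets, and function names are invented for illustration and are not from the CHOICE project.

```python
# Toy sketch of strict vs. loosened precision/recall for annotation suggestions.
related = {("vessel", "ship"), ("harbour", "port")}   # invented semantic neighbours

def is_match(suggested, manual, loose=False):
    """Exact match, or (in loosened mode) a match via the relatedness table."""
    if suggested == manual:
        return True
    return loose and ((suggested, manual) in related or (manual, suggested) in related)

def precision_recall(suggested, manual, loose=False):
    tp = sum(any(is_match(s, m, loose) for m in manual) for s in suggested)
    precision = tp / len(suggested) if suggested else 0.0
    recall = sum(any(is_match(s, m, loose) for s in suggested) for m in manual) / len(manual)
    return precision, recall

auto = ["ship", "amsterdam", "fishing"]    # invented automatic suggestions
gold = ["vessel", "amsterdam", "harbour"]  # invented manual annotations
print("strict:   P=%.2f R=%.2f" % precision_recall(auto, gold))
print("loosened: P=%.2f R=%.2f" % precision_recall(auto, gold, loose=True))
```
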
