If I’m hurriedly packing to go home to Blighty for a few days, I’ll likely pull out a raincoat. It’s a safe bet, right? But if I have a little more time on my hands, I might look up the detailed forecast, and find that it’s more likely to snow (a rare event in the UK), and so I’ll pack differently. Better predictions use more information, and can take a little more time. Two new papers from our team address how prediction relates to time and to learning.
We were early adopters in the surge of enthusiasm for prediction in language processing, in part due to our interest in Japanese (here) and in strangely fast ERP responses (here). One of the reasons why we’re able to understand language so well, despite speed, noise, etc., is that a lot of what is said is relatively predictable. But in the past couple of years we have become very interested in cases where predictions seem to take longer, specifically where some kinds of information influence predictions quickly, while others are used more slowly. This may open the door to understanding how predictions are generated.
The first new paper, “A bag-of-arguments mechanism for initial verb predictions”, is led by Wing Yee Chow, together with Cybelle Smith, Ellen Lau and me. To my mind, the coolest finding in this paper is in Experiment 1, where we manage to distinguish the strength and the source of predictions for verbs. We found that equally strong predictions can lead to dramatically different effects, depending on the source of the prediction. Our measure of verb prediction is the ERP N400 component, which is generally smaller the more predictable the current word is. When you encounter a couple of noun phrase arguments and are trying to figure out what the following verb will be (this happens all the time in verb-final languages, and in specific environments in English), your initial guesses are based on the identity of the nearby words, e.g., cheese + mouse are associated with the verb eat. But you don’t immediately take into account the argument roles of those words: if cheese is the agent and mouse is the patient, then the verb eat is not so likely after all, yet initially you don’t care about that. That sounds like a “bag of words” mechanism, right? In Experiment 2, however, we show that only the words in the current clause seem to matter. That’s the “bag of arguments” part. We’re excited by this paper because it shows clear evidence that some kinds of predictive information can be used more quickly than others.
The second new paper is an opinion(ated) article called “The role of language processing in language acquisition”, by me and Lara Ehrenhofer. It will be appearing in 2015 in a special issue of the journal Linguistic Approaches to Bilingualism as a target article with commentaries. If you’d like to write a commentary, drop the editors a line. I’m sure you’ll find something to annoy you in the article. It’s no secret that there has been a recent wave of interest in using the tools of language processing to study language learners (first language, second language, bilinguals). But this new way of looking at language learners hasn’t often made much connection to language learning, i.e., explaining how learners succeed or fail at, well, learning. We suggest what is needed to make that connection. From the abstract:
“We discuss three ways that language processing can be used in the service of understanding language acquisition. Level 1 approaches (“Processing in learners”) explore well known phenomena from the adult psycholinguistic literature and document how they play out in learner populations (child learners, adult learners, bilinguals). Level 2 approaches (“Learning effects as processing effects”) use insights from adult psycholinguistics to understand the language proficiency of learners. We argue that a rich body of findings that have been attributed to the development of the grammar of anaphora should instead be attributed to limitations in the learner’s language processing system. Level 3 approaches (“Explaining learning via processing”) use language processing to understand what it takes to successfully master the grammar of a language, and why different learner groups are more or less successful. We explore the role of language processing in why some grammatical phenomena are mastered late in children and not at all in adult learners. We examine the feasibility of the idea that children’s language learning prowess is directly caused by their processing limitations (“less is more”: Newport, 1990). We conclude that the idea is unlikely to be correct in its original form, but that a variant of the idea has some promise (“less is eventually more”).”
This paper certainly takes me out of my “comfort zone”, forcing me to do something that I always encourage students in our interdisciplinary programs to do. I have never attempted to say anything sensible about critical periods before — and you might decide that we haven’t yet succeeded. But the link between this paper and the other one is that the failures of prediction mechanisms might hold one key to understanding why some features of language are learned surprisingly late. Check out the paper to find out more.