The best things in life take time. That’s what we’ve been seeing again and again as we dig into the way that we pull information from memory during language processing. Three new papers show this in quite different ways.

Back in the day, I was enamored of showing that language users can implement dreadfully complex grammatical constraints in a flash, and that we can generate accurate predictions extremely quickly. But at some point we realized that we tend to learn more from cases where processing is slower, because we then stand a better chance of pulling apart the mental computations that are involved.

Slower memory access. The first new paper, led by Brian Dillon, makes what on the face of it seems like a resoundingly boring claim. The Mandarin reflexive ziji connects to local antecedents more quickly than it does to long-distance antecedents. That sounds pretty obvious, until you know the measure of speed and what other folks have found. This paper is our first foray into the world of speed-accuracy tradeoff (SAT). In most of the experiments that we do, we give the participants a task (make a judgment, read the sentence, answer a question, …), and they get to choose how quickly they do it. Not so in SAT, where participants are prompted to make a judgment at a specific time, and face dire consequences if they don’t respond right away. If the prompt comes too early, then participants have to just guess. If it comes much later, they perform at ceiling. In between those end points, sensitivity gradually increases with time, and all the action in SAT is in tracking the time course of that increase in sensitivity. SAT is a pain in the butt to administer, but its beauty is that it allows us to separate the question of how quickly task-relevant information becomes available from the question of the quality of the information that becomes available. This separation leads to surprising findings.
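For the quantitatively inclined: the standard way to model that rising sensitivity in the SAT literature is a shifted exponential approach to an asymptote. Here’s a minimal sketch in Python (my illustration; the parameter values are invented, and the paper’s actual fits will differ):

```python
import numpy as np

def sat_curve(t, lam, beta, delta):
    """Shifted-exponential SAT function commonly fit in this literature:
    d'(t) = lam * (1 - exp(-beta * (t - delta)))  for t > delta, else 0.

    lam   -- asymptote: the quality of the information that becomes available
    beta  -- rate of rise: how quickly that information becomes available
    delta -- intercept: when sensitivity first departs from chance
    """
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Two hypothetical conditions: same asymptote (same information quality),
# but different dynamics (one antecedent is accessed more slowly).
times = np.linspace(0.0, 3.0, 7)   # seconds from the response prompt
fast_access = sat_curve(times, lam=2.5, beta=3.0, delta=0.4)
slow_access = sat_curve(times, lam=2.5, beta=1.5, delta=0.6)
```

The payoff is that the asymptote and the dynamics can dissociate: two conditions can end up equally accurate while differing in how fast they get there, and that is exactly the dissociation SAT is built to detect.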

[Figure: slow_ziji_figure]

Most importantly for us, a series of studies by Brian McElree and his colleagues found that forming a long-distance linguistic dependency takes no longer than forming a short-distance dependency (here and here). That sounds counterintuitive, but if you’re a fan of direct-access, content-addressable memory architectures (CAM), then that’s exactly what you’d expect. What’s CAM, I hear you ask. It’s a memory architecture in which items in memory are accessed based on their content, and not based on their location in memory. In the domain of sentence structure, this means that a search process that looks for antecedents by traversing node-to-node through a tree structure isn’t available. The consequences for enforcing grammatical constraints are severe, as we’ve discussed in various places (Dave Kush’s dissertation, this paper, this review, etc.). That is why we have taken the previous SAT results so very seriously. Set against this backdrop, the finding that Mandarin ziji accesses a long-distance antecedent more slowly than a local antecedent really is news. Does it mean that we engage serial search processes? Or does it mean that long-distance reflexives are simply initially analyzed as local reflexives? That part is not yet resolved. But we’re looking forward to more discussion of this.
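To see why the distance question matters, here’s a toy contrast (my illustration, nothing from the papers) between the two retrieval architectures. In a content-addressable system the cue is matched against all items in parallel, so distance shouldn’t cost time; in a serial, structure-guided search, it should:

```python
# Toy contrast between the two retrieval stories (illustration only).

def direct_access(memory, cue):
    """Content-addressable retrieval: the cue is matched against every
    item at once, so the cost doesn't grow with the target's distance.
    One pass scores all items; the best match wins."""
    scores = [sum(item.get(f) == v for f, v in cue.items()) for item in memory]
    return memory[scores.index(max(scores))]

def serial_search(memory, cue):
    """Structure-guided search: walk back through the representation one
    node at a time, so the cost grows with the distance to the target."""
    steps = 0
    for item in reversed(memory):
        steps += 1
        if all(item.get(f) == v for f, v in cue.items()):
            return item, steps
    return None, steps

# Hypothetical encoding of two possible antecedents of ziji as feature
# bundles (the features and names here are invented for illustration).
memory = [
    {"word": "Zhangsan", "animate": True, "local": False},   # long-distance
    {"word": "Lisi",     "animate": True, "local": True},    # local
]
antecedent, steps = serial_search(memory, {"animate": True, "local": True})
best_match = direct_access(memory, {"animate": True, "local": True})
```

On the direct-access story, slower access to long-distance antecedents is the surprise that needs explaining.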

It’s gratifying to see this paper finally appear, as it has been in the works for a while. Brian started the project for an NSF-EAPSI proposal that he wrote as a first-year PhD student. Matt joined the project when he was a postdoc in SAT-central; he’s now a permanent fixture at UC Santa Cruz. Wing Yee jumped in when she was a fresh-faced new PhD student; she’s now on the faculty at University College London (UCL).

Look ahead in speaking. The second paper is another new area for us: language production. Here we’re led by Shota Momma, and we’re teaming up with Bob Slevc of UMD Psychology. The topic is something that I’ve been curious about for a long time: if you’re a speaker of a head-final language like Japanese, where the verb is the last word of the sentence, when do you decide what verb you’re going to use? Do you figure that out at the start of the sentence, and then hold onto the verb in memory until you reach the end of the sentence? Or do you somehow spit out the other pieces of the message while hoping that at the end you’ll be able to find a verb that fits? Mark Twain worried about related issues in his essay on The Awful German Language. There are good linguistic reasons to think that you need to commit to the verb early, since things like case markers depend on specific verb choices. But there’s also some really good psycholinguistic evidence that speakers just wait.

[Figure: slow_pwi_figure]

The Picture-Word Interference (PWI) paradigm is a standard technique for studying word production. If I show you a picture of a dog and ask you to name it, you’ll be slower to respond if I simultaneously show you a related word (cat, or duck). In a clever extension of the PWI paradigm, we can use the same technique to find out whether you’re planning ahead in speech. If I ask you to use a sentence to describe a scene, e.g., The man is chasing the dog, I can test whether an associate of dog, the last word in the sentence, disrupts the onset of speaking, even though the first words are unrelated to dog. I find this pretty cool. Antje Meyer showed in the 1990s that this technique works, and then a really interesting paper by Herbert Schriefers and friends found that German speakers show interference effects from verb associates when they’re starting a sentence with verb-second order, but not when they’re starting a sentence with verb-final order. That implies that Germans throw caution to the wind and say the NPs in a sentence, praying that a good verb will occur to them once they reach the end of the sentence.
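For concreteness, the dependent measure is simple: the interference effect is the difference in speech-onset latency between related-distractor and unrelated-distractor trials. A minimal sketch of the scoring (the trial numbers below are invented, not data):

```python
from statistics import mean

# Hypothetical PWI trials: (distractor_condition, speech_onset_latency_ms).
trials = [
    ("related", 812), ("related", 847), ("related", 790),
    ("unrelated", 761), ("unrelated", 773), ("unrelated", 755),
]

def interference_ms(trials):
    """Mean onset latency with a related distractor minus mean latency
    with an unrelated one. A positive value means the related word
    slowed speech onset, i.e., that word was being planned at onset."""
    related = [rt for cond, rt in trials if cond == "related"]
    unrelated = [rt for cond, rt in trials if cond == "unrelated"]
    return mean(related) - mean(unrelated)

print(f"interference effect: {interference_ms(trials):+.0f} ms")
```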

The neat feature of Shota’s new work is that he took advantage of the properties of Japanese to test things that couldn’t be tested in German. When Japanese speakers were about to utter a subject, he found exactly the same thing as Schriefers: no evidence of verb planning. But when the same speakers were about to utter an object, he found clear evidence of verb planning. So there is look-ahead in verb production, but it’s limited. In newer work in progress, we’re digging further into this finding: why do speakers need to choose a verb before they say an object, but not before they say a subject? More on that coming soon.

Slow predictions in comprehension. The third new paper also fits the theme of things taking longer than you’d think. This one is led by Wing Yee Chow, and it’s a collaboration with Suiping Wang of South China Normal U in Guangzhou and with Ellen Lau. Predictive processes are all the rage in psycholinguistics these days, and in cognitive neuroscience more broadly. Lots of research shows that people are good at predicting upcoming material, and that this helps them to be faster and less sensitive to noise. But we don’t know much about how we generate predictions. In the process of studying something quite different, we found that role-reversed sentences can provide a really useful window into the mechanisms of generating predictions. These are sentences like The thief.SUBJ the cop.OBJ arrest (in a language with suitable word order). The key property of sentences like this is that the verb arrest is strongly associated with cops and thieves, but the event is implausible when the thief is the agent and the cop is the patient. We can use the ERP N400 component to probe verb predictions, and we can use role reversals to pull apart the contributions of word associations vs. structural roles.
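Here’s a toy sketch (mine, not from the paper) of why role reversals do that work: a predictor keyed on a bag of words cannot see the reversal, while one keyed on ordered argument roles can:

```python
# Toy predictors (illustration only; the probabilities are invented).

ASSOC = {frozenset(["cop", "thief"]): {"arrest": 0.8}}
ROLES = {("cop", "thief"): {"arrest": 0.8}}   # keyed on (agent, patient)

def associative_prediction(arguments):
    """Bag-of-words association: blind to who did what to whom."""
    return ASSOC.get(frozenset(arguments), {})

def role_based_prediction(agent, patient):
    """Role-sensitive prediction: keyed on ordered (agent, patient)."""
    return ROLES.get((agent, patient), {})

print(associative_prediction(["cop", "thief"]))   # {'arrest': 0.8}
print(associative_prediction(["thief", "cop"]))   # same output: reversal invisible
print(role_based_prediction("cop", "thief"))      # {'arrest': 0.8}
print(role_based_prediction("thief", "cop"))      # {}: reversal detected
```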

[Figure: slow_n400_figure]

If you know anything about the N400, it’s probably the fact that it’s generally elicited by semantically anomalous sentences, e.g., I drink my coffee with cream and socks. There are probably hundreds of studies by now that fit the generalization that improbable words elicit larger N400s. In slightly more technical terms, N400 amplitude is inversely related to cloze probability. So the really interesting cases are the ones where N400 strength does not track cloze probability. And role reversals are just such a case: cop-thief-arrest is a lot more likely than thief-cop-arrest, but the N400 to the verb arrest is the same in both cases. Many different groups have found this, in different languages. The cool new finding in Wing Yee’s paper is that the connection between cloze and N400 can be restored by adding a little time: inserting an extra phrase between the arguments and the verb does the trick. Predictions based on word associations can be exploited very quickly, but predictions based on argument roles take a little longer to act upon. We think that this opens the door to understanding how predictions are generated, and that’s something that we’re spending a lot of time exploring nowadays.
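A back-of-envelope way to put the timing story together (my toy model, with invented numbers): if early predictions come only from word associations and role-based predictions need more time, then the N400 at the verb should ignore the reversal at short argument-verb delays and track it at longer ones:

```python
def toy_n400(predicted_prob):
    """Toy mapping: less predicted -> larger (more negative) N400.
    The microvolt scale is illustrative, not measured."""
    return -5.0 * (1.0 - predicted_prob)

# Short delay: only association-based predictions are available, and they
# treat cop-thief-arrest and the reversed thief-cop-arrest alike.
print(toy_n400(0.8), toy_n400(0.8))   # -1.0 -1.0 : same N400 despite the cloze gap

# Extra phrase = extra time: role-based predictions now contribute, and
# the reversed order is no longer predicted.
print(toy_n400(0.8), toy_n400(0.0))   # -1.0 -5.0 : N400 tracks cloze again
```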

There’s more to come on all of these topics, but in each case what I find quite encouraging is that we learn the most when people are not especially fast in using their linguistic knowledge. I used to give talks with the rough message, “Hey, isn’t it cool how fast and clever language processing is!” But I didn’t have so much to say about how we achieve that. Now that we’re seeing more cases where we’re not quite so fast, I feel that we’re learning more about the underlying mechanisms.