Psycholinguistics I/II - 2022-2023

LING 640/641

Overview

This course is a year-long foundation course sequence in psycholinguistics, aimed at graduate students from any language science field. The course assumes no specific background in psycholinguistics, experimentation, or statistics. The first semester course also requires only limited background in formal linguistics. But all students should have a serious commitment to some area of language science, and relevant expertise that they can contribute to the class group.

Psycholinguistics is a broad field. In principle, it includes all areas of the mentalistic study of language, including the various fields of so-called formal/theoretical linguistics, plus language acquisition and the neuroscience of language. And while we’re at it, why not throw in language disorders and second language acquisition for good measure. Due to this breadth, psycholinguistics can sometimes appear like a scientific archipelago – many interesting but disconnected islands. We will make no attempt to tour all of these islands here. Instead, we will focus on trying to understand the overall space, how the pieces fit together, and recurring themes and problems. We will focus on:

  • Understanding the landscape of psycholinguistics
  • Psycholinguistic thinking: finding good questions, evaluating evidence, resolving conflicts
  • Doing psycholinguistics: tools needed to carry out psycholinguistic research

In the Fall semester (LING 640) we will devote a lot of time to ‘model’ problems, such as speech categorization and word recognition, because these relatively simple cases allow us to probe deeply into psycholinguistic issues with limited linguistic overhead.

In the Spring semester (LING 641) we will devote more attention to the syntax and semantics of sentences, and to their role in language learning and language processing, in both speaking and understanding.

Location, location, location

In-person interaction is so valuable. We will do everything possible to maintain that. Online interaction is next best. Hybrid is a nut that has yet to be cracked.

For now, the semester is proceeding as ‘normal’, i.e., roughly as we used to, 3-4 years ago. We hope that it will remain this way, though there may be bumps along the way.  

Health

We hope that you will not be sidelined by COVID-19 during this semester. But it could happen. It probably will happen. If you get infected, UMD has extensive guidelines on how that impacts your class participation. We will try to take it all in our stride.

Remember that mental health is an important element of good health, especially for graduate students. Be aware, and seek help if needed. 

Connectivity

A research university thrives on connectivity. We have lost a great deal of that over the past 5 semesters. This is a time for rebuilding. We have learned a great deal over the course of the past couple of years.

We have seen fewer people. We have had fewer spontaneous encounters. We have had fewer shared experiences. So we have needed to take extra steps to be connected.

One tool that helped us last year was a class Slack channel, created within the “Maryland Psycholinguistics” workspace. It proved to be useful for sharing questions, documents, and class updates. It could be good to do this again.

Individual meetings

Individual and small group conversations are especially valuable right now. Seek them out!

I am very happy to have individual discussions. Just drop me a line.

I welcome opportunities for in-person meetings. In 2020-22 I learned to greatly value outdoor meetings in addition to traditional indoor-with-computers meetings. The UMD campus is beautiful year round, and it is good for graduate students and faculty to be visibly active on campus. (Did you know that the entire campus is an arboretum? — Check out the University of Maryland Arboretum Explorer app.)

 

 

Schedule – Fall

Mondays & Wednesdays, 12:00 – 1:30.  1108B Marie Mount Hall.

August 29: Introduction. The psycholinguistic landscape

August 31: Some core concepts. Discussion of whistled languages.

September 7: Development of Speech Perception

September 12: Development of Speech Perception

September 14:  Becoming a native listener

September 19: Distributional learning

September 21: Learning contrasts

September 26:

September 28:

October 3: Neuroscience of speech perception and production

October 5: Word recognition

October 10: Active processing

October 12: Recognizing words in context

October 17:

October 19: Neuroscience of word recognition

October 24:

October 26: Word production

October 31:

November 2:

November 7:

November 9:

November 14:

November 16:

November 21:

November 23: NO CLASS – THANKSGIVING BREAK

November 28:

November 30:

December 5:

December 7:

December 12:

 

Requirements

This is graduate school. Your grade should not be your top concern here. You should be aiming to get a top grade, but your focus should be on using the course to develop the skills that will serve you well in your research. There will be no exams for this course. The focus of the course is on reading, discussing, writing and doing throughout the semester, and hence your entire grade will be based upon this.

Grades will be aligned with the values that guide this course: (i) active engagement with the core questions, (ii) thinking and writing clearly, (iii) taking risks and exploring new ideas, (iv) communicating and collaborating with others.

If you want to get the maximum benefit from this class (i.e. learn lots and have a grade to show for it at the end), you will do the following …

1. Come to class prepared, and participate (40% of grade).

Being prepared means having done some reading and thinking before coming to class. Writing down your thoughts or questions about the article(s) is likely to help. Although many readings are listed for this course, you are not expected to read them all from beginning to end. An important skill to develop is the ability to efficiently extract ideas and information from writing. Participating in class discussions is valuable because it makes you an active learner and greatly increases the likelihood that you will understand and retain the material. You should also feel free to contact me outside of class with questions that you have about the material.

2. Think carefully and write clearly in assignments (60% of grade).

The assignments will come in a variety of formats. In lab assignments you will get hands-on experience with various research techniques in psycholinguistics, plus experience in reporting the results of those experiments. In writing assignments you will think and write about issues raised in class and in the assigned readings. The writing assignment will often be due before the material is discussed in class: this will help you to be better prepared for class and to form your own opinions in advance of class discussion. In your writing it is important to write clearly and provide support for claims that you make.

We will plan to have many shorter writing assignments, typically involving responses to questions about individual readings, for which you will have relatively limited time. These are not intended to be major writing assignments. But they will all be read, and they will contribute to your class grade, following the guiding values of the class.

If you are worried about how you are doing in the course, do not hesitate to contact me. Email is generally the most reliable way of reaching me.

Grade scale

 A   80-100%    B-  60-65%
 A-  75-80%     C+  55-60%
 B+  70-75%     C   50-55%
 B   65-70%     C-  45-50%

Note that even in the A range there is plenty of room for you to show extra initiative and insight. The threshold for A is deliberately set low, so that you have an opportunity to get additional credit for more creative work.

Teamwork

Written work should be submitted individually, unless the assignment guidelines state otherwise or you have made prior arrangements with the instructor. That said, you are strongly encouraged to work together on anything in this course. Academic honesty includes giving appropriate credit to collaborators. Collaboration should not be confused with writing up the results of a classmate’s work – that is unacceptable. If you work as part of a group, indicate this at the top of your assignment when you submit it.

Assignments

Discussion note #5 [10/3/22]. Our next discussion focuses on some ways in which researchers have used electrophysiological measures (EEG/MEG) to try to understand how speech sounds are mentally represented/encoded. Näätänen et al. (1997), Kazanina et al. (2006), and Pelzl et al. (2021) are all examples of studies that rely on comparisons of speakers of different native languages. But they each adopt a different logic to do this. Choose (at least) two of the three studies, and address the following: (i) How do they differ (if at all) in terms of the sound encodings that they are seeking to clarify, and the experimental logic for how they achieve this? (ii) What (if anything) is the “value added” in these studies from the use of electrophysiological (EEG/MEG) measures, rather than behavioral measures?

Two of the articles are short. The newest one is longer, and technically more up-to-date, but the core findings are fairly clear. A couple of clarifications. First, I am not seeking a specific answer on the second question. You might conclude that the more cumbersome method is beneficial or not. What I do want you to do is to think through the rationale of what one might hope to gain from the use of specific methods, as this is the kind of decision that one needs to make again and again in psycholinguistics. Second, although I was a co-author of one of the studies and the logic comes from an earlier study of mine (Phillips et al. 2000), feel free to challenge it. None of the authors are beholden to the claims made there. 

Discussion note #4 [9/26/22]. For a number of years, Naomi Feldman and colleagues have been digging into the problem of how infant sensitivity to speech categories might develop, despite lack of minimal pairs and lack of clearly distinct acoustic distributions. The aim of this discussion note is to try to situate this body of work in the context of developmental changes that we have already discussed. In particular, is there a role for early word learning in figuring out speech categories, according to Feldman et al. (Those views might be different in different papers.)

There are a few papers, which are linked to the Readings tab. You do not need to read and respond to all of them. Do read at least one of them, preferably Hitczenko & Feldman (2022), as it is brand new. Preferably look at at least one other. Feldman et al. (2009) adopts a very clear position on the importance of word learning, which may be at odds with later claims. (In that paper, if you can explain why Feldman argues that it’s easier to learn categories from a lexicon without minimal pairs, then you understand the main point.) That one is also the shortest. Of the two 2021 papers, the one in Open Mind is more accessible.

Each of the articles involves some amount of technical material about computational models, which may be more or less accessible to you. You do not need to follow all of that material in order to get the core ideas in the papers. (Also, I do not understand all of it, either. But feel free to discuss with classmates.)
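For readers newer to this kind of modeling, the core intuition behind distributional learning can be seen in a toy simulation. Everything below is invented for illustration – the voice onset time (VOT) values, the two category means, and the simple two-Gaussian EM fit are not from any of the Feldman et al. papers, whose models are considerably richer – but it shows how category centers can be recovered from unlabeled acoustic tokens alone, with no minimal pairs:

```python
import math
import random

random.seed(1)

# Simulated VOT values (ms): an unlabeled bimodal mixture, loosely modeled
# on classic distributional-learning stimuli. Values are made up.
data = ([random.gauss(10, 5) for _ in range(200)] +   # short-lag, /b/-like
        [random.gauss(60, 5) for _ in range(200)])    # long-lag, /p/-like

def em_two_gaussians(xs, iters=50):
    """Fit a 2-component 1D Gaussian mixture to unlabeled data with EM."""
    mu = [min(xs), max(xs)]        # crude initialization at the extremes
    sd = [10.0, 10.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: each component's responsibility for each data point
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-((x - mu[k]) ** 2) / (2 * sd[k] ** 2)) / sd[k]
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixture weights, means, standard deviations
        for k in range(2):
            n_k = sum(r[k] for r in resp)
            w[k] = n_k / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / n_k
            sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                  for r, x in zip(resp, xs)) / n_k) or 1.0
    return mu, sd, w

mu, sd, w = em_two_gaussians(data)
print(sorted(round(m) for m in mu))  # recovered category centers, near 10 and 60
```

The debate in the readings is precisely about whether real infant input is this cooperative: when the acoustic distributions overlap heavily and vary by context, purely unsupervised clustering of this sort struggles, which is where proposals about word-level (lexical) information come in.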

Synthesis note #1 [9/21/22]. How accurate is it to say that infants have phonological category representations like older children and adults?

For purposes of this piece, “infants” can refer to any or all of ages from birth to 18 months. The main aim of this exercise is to think carefully about the relationship between behaviors and experimental evidence, on the one hand, and, on the other, the cognitive representations that are responsible for those behaviors. Your write-up does not need to be long, but please explain as clearly as possible.

Discussion note #3 [9/14/22]: A very short paper by Stager & Werker (1997) reports four experiments on infants’ sensitivity to labels assigned to pictures. In one key experiment, 8-month-olds appear to “outperform” 14-month-olds. This seems counterintuitive. What is going on, and why is this an elegant demonstration? (The paper appeared in Nature, it has been cited over 1,000 times, and it has generated a lot of interesting subsequent work.)

Discussion Note #2 [9/7/22]: This discussion note is about Chomsky (1965, ch 1, pp. 3-15), and Marr, D (1982; pp. 8-28, esp. 20-28). Question: Marr’s discussion of visual perception highlights the “goal” of the computation. Does linguistic computation have a “goal”? Does Marr’s view on levels of analysis align with Chomsky’s contrasting of competence and performance?

I recommend keeping in mind the distinction that we discussed on Wednesday (8/31) between (i) Levels of analysis, (ii) Tasks, and (iii) Mechanisms.

Discussion note #1 [8/31/22]: Whistled languages are a striking example of the adaptation of human speech to different environments. Generally they are not distinct languages, but versions of spoken languages that are conveyed in whistled form. A recent review of whistled languages from around the world by Julien Meyer reveals striking similarities in how languages adapt to the whistled medium. This is also summarized in a broad audience piece (with demos!) by Bob Holmes in Knowable Magazine. Please answer the following questions: (i) Why does whistling force languages to limit the information that is conveyed to the listener? Describe a couple of regularities in how languages choose to do this. (ii) Are there psycholinguistic implications of how languages adapt to the whistled medium? In particular, do the adaptations seem more suited to helping speakers or helping listeners (or neither)?

Introduction [8/30/22]: Please send an email to colin@umd.edu, introducing yourself and your relevant background and motivations. I’d love to know the following:

  1. What is your background in psycholinguistics, whether from (in)formal study or hands-on experience? What kind of background do you have in experimentation, including knowledge of experiment control platforms (e.g., PC-Ibex, PsychoPy), or in statistics/analysis tools (e.g., R, Excel)?
  2. What, briefly, is your linguistic background, including languages where you have expertise?
  3. What are the outcomes that you hope to gain from the course (either by December ’22 or by May ’23)?

 

 

Notes

These are links to the slides used in the course. But note that they include some things that were not discussed in class, and in many cases the slides do not do justice to our extensive discussions in class.

Set 1: Scene setting

Set 2: Development of speech perception

Set 3: Electrophysiology and speech perception

 

 

Readings 

Introduction (Fall)

Whistled languages – something completely different … maybe

Meyer, J. (2021). Environmental and linguistic typology of whistled languages. Annual Review of Linguistics, 7, 493-510.

Holmes, B. (2021). Speaking in whistles. Knowable Magazine, publ. 8/16/21.

 

Higher level background

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press. [chapter 1]

Marr, D. (1982). Vision. Cambridge, MA: MIT Press. [excerpt]

Jackendoff, R. (2002). Foundations of language. Oxford University Press. [chapter 1, chapter 2, chapter 3, chapter 4]

Lewis, S. & Phillips, C. (2015). Aligning grammatical theories and language processing models. Journal of Psycholinguistic Research, 44, 27-46.

Momma, S. & Phillips, C. (2018). The relationship between parsing and generation. Annual Review of Linguistics, 4, 233-254.

Speech Perception, Learning Sound Categories

Stager, C. & Werker, J. (1997). Infants listen for more phonetic detail in speech perception than word learning tasks. Nature, 388, 381-382. [This is one of the primary readings for the section of the course on phonetic/phonological representations. A very short, but very important study. Why are younger infants better than older infants, even on native-language contrasts?]

Hitczenko, K. & Feldman, N. (2022). Naturalistic speech supports distributional learning across contexts. Proceedings of the National Academy of Sciences, 119, e2123230119.

Feldman, N., Griffiths, T., & Morgan, J. (2009). Learning phonetic categories by learning a lexicon. Proceedings of the 31st Annual Meeting of the Cognitive Science Society.

Feldman, N., Goldwater, S., Dupoux, E., & Schatz, T. (2021). Do infants really learn phonetic categories? Open Mind, 5, 113-131.

Schatz, T., Feldman, N., Goldwater, S., Cao, X., & Dupoux, E. (2021). Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input. Proceedings of the National Academy of Sciences, 118.

Werker, J. (1994). Cross-language speech perception: Developmental change does not involve loss. In: Goodman & Nusbaum (eds.), The Development of Speech Perception. Cambridge, MA: MIT Press, pp. 93-120. [Useful for Lab 1. This paper reviews in more detail the reasons why Werker adopts a structure-adding view of phonetic development.]

Werker, J. (1995). Exploring developmental changes in cross-language speech perception. In L. Gleitman & M. Liberman (eds) Language: An Invitation to Cognitive Science, Vol 1 (2nd edn.), 87-106. [This paper is the best starting point for this section of the course. It presents an overview of Werker’s views on phonetic development up to 1995, including a straightforward summary of her important cross-language experiments from the early 1980s.]

Werker, J. F., Pons, F., Dietrich, C., Kajikawa, S., Fais, L., & Amano, S. (2007). Infant-directed speech supports phonetic category learning in English and Japanese. Cognition, 103, 147-162. [Analysis of what infants actually hear. It is presented as an argument for unsupervised distributional learning, but I suspect that it shows the opposite.]

Cognitive Neuroscience of Speech Perception

Näätänen et al. (1997). Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385, 432-434.

Kazanina, N., Phillips, C., & Idsardi, W. (2006). The influence of meaning on the perception of speech sounds. Proceedings of the National Academy of Sciences, 103, 11381-11386.

Pelzl, E., Lau, E., Guo, T., & DeKeyser, R. (2021). Even in the best case scenario L2 learners have persistent difficulty perceiving and utilizing tones in Mandarin. Studies in Second Language Acquisition, 43, 268-296.

van Turennout, M., Hagoort, P., & Brown, C. (1998). Brain activity during speaking: from syntax to phonology in 40 milliseconds. Science, 280, 572-574.

Word Recognition

An accessible introduction to some foundational concepts and findings: 

Altmann, G. (1997). Words and how we (eventually) find them. Chapter 6 of The Ascent of Babel. Oxford University Press. [A good introductory chapter.]

Some recommended readings for class discussion.

Chen, L. & Boland, J. (2008). Dominance and context effects on activation of alternative homophone meanings. Memory and Cognition, 36, 1306-1323.

Magnuson, J., Mirman, D., & Myers, E. (2013). Spoken word recognition. In D. Reisberg (ed.), The Oxford Handbook of Cognitive Psychology, pp. 412-441. Oxford University Press.

Gaston, P., Lau, E., & Phillips, C. (2020). How does(n’t) syntactic context guide auditory word recognition? Submitted.

Lau, E., Phillips, C., & Poeppel, D. (2008). A cortical network for semantics: (de)constructing the N400. Nature Reviews Neuroscience, 9, 920-933.

Federmeier, K. & Kutas, M. (1999). Right words and left words: electrophysiological evidence for hemispheric differences in meaning processing. Cognitive Brain Research, 8, 373-392.

Ness, T. & Meltzer-Asscher, A. (2021). Love thy neighbor: Facilitation and inhibition in the competition between parallel predictions. Cognition, 207, 104509.

Staub, A., Grant, M., Astheimer, L., & Cohen, A. (2015). The influence of cloze probability and item constraint on cloze task response time. Journal of Memory and Language, 82, 1-17.

Some seminal papers discussed in class.

Marslen-Wilson, W. (1975). Sentence perception as an interactive parallel process. Science, 189, 226-228.

Marslen-Wilson, W. (1987). Functional parallelism in spoken word recognition. Cognition, 25, 71-102.

Boland, J. and Cutler, A. (1996). Interaction with autonomy: Multiple output models and the inadequacy of the Great Divide. Cognition, 58, 309-320.

Dahan, D., Magnuson, J., & Tanenhaus, M. (2001). Time course of frequency effects in spoken word recognition: Evidence from eye-movements. Cognitive Psychology, 42, 317-367.

Kutas, M. & Federmeier, K. (2000). Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences, 4, 463-470.