@phdthesis{Muller2022,
title = {What could go wrong? Linguistic illusions and incremental interpretation.},
author = {Hanna Muller},
url = {http://www.colinphillips.net/wp-content/uploads/2022/08/muller2022-1.pdf},
year = {2022},
date = {2022-07-29},
urldate = {2022-07-29},
abstract = {The systems underlying incremental sentence comprehension are, in general, highly successful — comprehenders typically understand sentences of their native language quickly and accurately. The occasional failure of the system to deliver an appropriate representation of a sentence is therefore potentially illuminating. There are many ways the comprehender’s general success could in principle be accomplished; the systematic pattern of failures places some constraints on the possible algorithms. This dissertation explores two cases of systematic failure, negative polarity illusions and substitution illusions (sometimes called “Moses illusions”), with the goal of identifying the specific circumstances under which each illusion arises and, as a consequence, the specific constraints placed on possible implementations of linguistic knowledge.
In the first part of this dissertation, I explore the profile of the negative polarity illusion, a case in which a sentence containing an unlicensed negative polarity item and a preceding, but not structurally relevant, licensor is perceived as acceptable, at least in early stages of processing. I consider various proposals for the grammatical knowledge that determines the restricted distribution of negative polarity
items, and possible algorithms for using that grammatical knowledge in real time to process a sentence containing a negative polarity item. I also discuss possible parallels between negative polarity illusions and superficially similar illusory phenomena in other domains, such as subject-verb agreement. Across sixteen experiments, I show that the profile of the illusion is more restricted than previously thought. Illusions do not always arise when an unlicensed negative polarity item is preceded by a structurally irrelevant licensor, and the circumstances under which they do arise are quite specific. These findings suggest that the negative polarity illusion may be meaningfully distinct from other illusory phenomena, though this conclusion does not necessarily require stipulating a separate mechanism for every illusion. I discuss the implications of these findings for possible real-time implementations of grammatical knowledge.
In the second part of this dissertation, I turn to the substitution illusion, a case in which a word in a trivia fact is swapped out for another word, making the sentence a world knowledge violation, but comprehenders do not consciously detect the anomalous nature of the sentence. Here I attempt to develop specific and testable hypotheses about the source of the illusion, paying particular attention to how the same mechanism that “fails” in illusion sentences (in that it does not allow the comprehender to detect the anomaly) serves the comprehender well in other circumstances. I demonstrate that the substitution illusion, like the negative polarity illusion, is more restricted than previously thought — some stimuli yield very high illusion rates while others yield very low illusion rates, and this variability appears to be non-random. In seven experiments, I pursue both a correlational approach and an experimental manipulation of illusion rates, in order to narrow the space of possible explanations for the illusion.
These investigations collectively demonstrate that occasional errors in comprehension do not necessarily reflect the use of “shortcuts” in sentence processing, and can instead be explained by the interaction of the linguistic system with non-linguistic components of the cognitive architecture, such as memory and attention. While neither illusion phenomenon is ultimately fully explained, the research presented here constitutes an important step forward in our understanding of both domains and their broader implications.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
@phdthesis{Gaston2020,
title = {The role of syntactic prediction in auditory word recognition},
author = {Phoebe Gaston},
url = {http://www.colinphillips.net/wp-content/uploads/2020/08/gaston2020.pdf},
year = {2020},
date = {2020-07-30},
school = {University of Maryland},
abstract = {Context is widely understood to have some influence on how words are recognized from speech. This dissertation works toward a mechanistic account of how contextual influence occurs, looking deeply at what would seem to be a very simple instance of the problem: what happens when lexical candidates match with auditory input but do not fit with the syntactic context. There is, however, considerable conflict in the existing literature on this question. Using a combination of modelling and experimental work, I investigate both the generation of abstract syntactic predictions from sentence context and the mechanism by which those predictions impact auditory word recognition. In the first part of this dissertation, simulations in jTRACE show that the speed with which changes in lexical activation can be observed in dependent measures should depend on the size and composition of the set of response candidates allowed by the task. These insights inform a new design for the visual world paradigm that ensures that activation can be detected from words that are bad contextual fits, and that facilitatory and inhibitory mechanisms for the syntactic category constraint can be distinguished. This study finds that wrong-category words are activated, a result that is incompatible with an inhibitory syntactic category constraint. I then turn to a different approach to studying lexical activation, using information-theoretic properties of the set of words consistent with the auditory input while neural activity is recorded in MEG. Phoneme surprisal and cohort entropy are evaluated as predictors of the neural response to hearing single words when that response is modeled with temporal response functions. This lays the groundwork for a design that can test different versions of surprisal and entropy, incorporating facilitatory or inhibitory syntactic constraints on lexical activation when the stimuli are short sentences. 
Finally, I investigate a neural effect in MEG previously thought to reflect syntactic prediction during reading. When lexical predictability is minimized in a new study, there is no longer evidence for structural prediction occurring at the beginning of sentences. This supports the possibility of a tighter link between syntactic and lexical processing.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
@phdthesis{Malko2018,
title = {The role of structural information in the resolution of long-distance dependencies},
author = {Anton Malko},
url = {http://www.colinphillips.net/wp-content/uploads/2020/08/malko2018.pdf},
year = {2018},
date = {2018-12-18},
school = {University of Maryland},
abstract = {The main question that this thesis addresses is: in what way does structural information enter into the processing of long-distance dependencies? Does it constrain the computations, and if so, to what degree? Available experimental evidence suggests that structurally illicit but otherwise suitable constituents are sometimes accessed during dependency resolution. Subject-verb agreement is a prime example (Wagers et al., 2009; Dillon et al., 2013), and similar effects have been reported for negative polarity item (NPI) licensing (Vasishth et al., 2008) and reflexive pronoun resolution (Parker and Phillips, 2017; Sloggett, 2017). Prima facie, this evidence suggests that structural information fails to perfectly constrain real-time language processing to be in line with grammatical constraints. This conclusion would fall neatly in line with the assumption that human sentence processing relies on cue-based memory (e.g., McElree et al., 2003; Lewis and Vasishth, 2005; Van Dyke and Johns, 2012; Wagers et al., 2009, a.m.o.), the key property of which is the fragility of memory search, which can return irrelevant results if they look similar enough to the relevant ones. The attractiveness of such an approach lies in its parsimony: there is independent evidence that general-purpose working memory is cue-based (Jonides et al., 2008), so we do not need to postulate any language-specific mechanisms. Additionally, the processing of multiple linguistic dependencies can be analyzed within the same theoretical framework. The cue-based approach has also been argued to be the best one in terms of its empirical coverage: some of the experimental evidence was assumed to be explainable only within it (the absence of ungrammaticality illusions in subject-verb agreement is the main example, to which we will return in more detail later).
However, several other approaches have recently been suggested that can account for these cases (Eberhard et al., 2005; Xiang et al., 2013; Sloggett, 2017; Hammerly et al., draft.april.2018). These approaches usually assume separate processing mechanisms for different linguistic dependencies, and thus lose the parsimonious attractiveness of cue-based memory models. They also take a different stance on the role of structural information in real-time language processing, assuming that structural cues do accurately guide dependency resolution. A priori, there is no reason why they could not turn out to be true. But given the theoretical attractiveness of cue-based models, in which structural information does not categorically restrain processing, it is important to critically evaluate these recent claims. In this thesis, we focus on reflexive pronouns and on the novel pattern reported in Parker and Phillips (2017) and Sloggett (2017): the finding that reflexive pronouns are sensitive to the properties of structurally inaccessible antecedents in some specific conditions (the interference effect). The two works report consistent findings, but the accounts they give take opposite perspectives on the role of structural information in reflexive resolution. Our aim in this thesis is to assess the reliability of these findings and to experimentally investigate cases that would provide clearer evidence on how structure guides reflexive processing. To this aim, we conduct two direct replications of Parker and Phillips (2017) and four novel experiments further investigating the properties of the interference effect. None of the six experiments provided strong statistical support for the previous findings.
After ruling out several possible confounds and analyzing numerical patterns (which go in the expected direction and are consistent with previous results), we conclude that the interference effect is likely real, but may be weaker than the previous studies would lead one to believe. These results can be used to set more realistic expectations for future studies regarding the size of the effect and the statistical power necessary to detect it. With respect to our main goal of distinguishing between cue-based and alternative accounts of the interference effect, we tentatively conclude that cue-based approaches are preferred; however, one has to assume that some structural features are able to categorically rule out illicit antecedents. Further highly powered studies are necessary to verify and confirm these conclusions.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
@phdthesis{Ettinger2018b,
title = {Relating lexical and syntactic processes in language: Bridging research in humans and machines},
author = {Allyson Ettinger},
url = {http://www.colinphillips.net/wp-content/uploads/2020/08/ettinger2018.pdf},
year = {2018},
date = {2018-07-26},
school = {University of Maryland},
abstract = {Potential to bridge research on language in humans and machines is substantial—as linguists and cognitive scientists apply scientific theory and methods to understand how language is processed and represented by humans, computer scientists apply computational methods to determine how to process and represent language in machines. The present work integrates approaches from each of these domains in order to tackle an issue of relevance for both: the nature of the relationship between low-level lexical processes and syntactically-driven interpretation processes. In the first part of the dissertation, this distinction between lexical and syntactic processes focuses on understanding asyntactic lexical effects in online sentence comprehension in humans, and the relationship of those effects to syntactically-driven interpretation processes. I draw on computational methods for simulating these lexical effects and their relationship to interpretation processes. In the latter part of the dissertation, the lexical/syntactic distinction is focused on the application of semantic composition to complex lexical content, for derivation of sentence meaning. For this work I draw on methodology from cognitive neuroscience and linguistics to analyze the capacity of natural language processing systems to do vector-based sentence composition, in order to improve the capacities of models to compose and represent sentence meaning.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
@phdthesis{Ehrenhofer2018,
title = {Argument roles in adult and child comprehension},
author = {Lara Ehrenhofer},
url = {http://www.colinphillips.net/wp-content/uploads/2020/08/ehrenhofer2018.pdf},
year = {2018},
date = {2018-07-25},
school = {University of Maryland},
abstract = {Language comprehension requires comprehenders to commit rapidly to interpretations based on incremental and occasionally misleading input. This is especially difficult in the case of argument roles, which may be more or less useful depending on whether comprehenders also have access to verb information. In children, a combination of subject-as-agent parsing biases and difficulty with revising initial misinterpretations may be the source of persistent misunderstandings of passives, in which subjects are not agents. My experimental investigation contrasted German five-year-olds’ argument role assignment in passives in a task that combined act-out and eye-tracking measures. Manipulating the order of subject and voice (Exp. 4.1, 4.3) did not impact German learners’ success in comprehending passives, but providing the cue to voice after the main verb (Exp. 4.2) led to a steep drop in children’s comprehension outcomes, suggesting that the inclusion of verb information impacts how young comprehenders process argument role information. In adults, many studies have found that although argument role reversals create strong contrasts in offline cloze probability, they do not elicit N400 contrasts. This may be because in the absence of a main verb, the parser is unable to use argument role information. In an EEG experiment (Exp. 5.1), we used word order to manipulate the presence or absence of verb information, contrasting noun-noun-verb reversals (NNV; which cowboy the bull had ridden) with noun-verb-noun reversals (NVN; which horse had raced the jockey). We found an N400 contrast in NVN contexts, as predicted, but surprisingly, we also found an N400 contrast in NNV contexts. Unlike previous experimental materials, our stimuli were designed to elicit symmetrically strong and distinct verb predictions with both canonical and reversed argument role assignments. 
These data suggest that adult comprehenders are able to overcome the absence of a main verb when probability distributions over combined verb-argument role information can contribute to generating role-specific verb candidates.
The overall investigation suggests that prediction and comprehension of argument role information is impacted by the presence or absence of verb information, which may allow comprehenders to bridge the divide between linguistic representations and world knowledge in real-time processing.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
@phdthesis{Momma2016c,
title = {Parsing, generation, and grammar},
author = {Shota Momma},
url = {http://www.colinphillips.net/wp-content/uploads/2016/08/momma2016.pdf},
year = {2016},
date = {2016-07-29},
school = {University of Maryland},
abstract = {Humans use their grammatical knowledge in more than one way. On one hand, they use it to understand what others say. On the other hand, they use it to say what they want to convey to others (or to themselves). In either case, they need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of their language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same cognitive mechanisms. Currently, the field of psycholinguistics implicitly adopts the position that they are supported by different cognitive mechanisms, as is evident from the fact that most psycholinguistic models seek to explain either comprehension or production phenomena. The potential existence of two independent cognitive systems underlying linguistic performance doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. This thesis thus aims to unify the structure building system in comprehension, i.e., the parser, and the structure building system in production, i.e., the generator, into one, so that the linking theory between knowledge and performance can also be unified. I will discuss and unify both existing and new data pertaining to how structures are assembled in understanding and speaking, and attempt to show that the unification of parsing and generation is at least a plausible research enterprise. In Chapter 1, I will discuss previous and current views on how parsing and generation are related to each other. I will outline the challenges for the current view that the parser and the generator are the same cognitive mechanism. This single system view is discussed and evaluated in the rest of the chapters.
In Chapter 2, I will present new experimental evidence suggesting that the grain size of the pre-compiled structural units (henceforth simply structural units) is rather small, contrary to some models of sentence production. In particular, I will show that the internal structure of the verb phrase in a ditransitive sentence (e.g., The chef is donating the book to the monk) is not specified at the onset of speech, but is specified before the first internal argument (the book) needs to be uttered. I will also show that this timing of structural processes with respect to the verb phrase structure is earlier than the lexical processes of the verb’s internal arguments. These two results in concert show that the size of structure building units in sentence production is rather small, yet structural processes still precede lexical processes. I argue that this view of generation resembles the widely accepted model of parsing that utilizes both top-down and bottom-up structure building procedures. In Chapter 3, I will present new experimental evidence suggesting that the structural representation strongly constrains the subsequent lexical processes. In particular, I will show that conceptually similar lexical items interfere with each other only when they share the same syntactic category in sentence production. A mechanism that I call syntactic gating will be proposed; this mechanism characterizes how the structural and lexical processes interact in generation. I will present two Event Related Potential (ERP) experiments showing that lexical retrieval in (predictive) comprehension is also constrained by syntactic categories. I will argue that the syntactic gating mechanism is operative in both parsing and generation, and that the interaction between structural and lexical processes in both parsing and generation can be characterized in the same fashion.
In Chapter 4, I will present a series of experiments examining the timing at which verbs’ lexical representations are planned in sentence production. It will be shown that verbs are planned before the articulation of their internal arguments, regardless of the target language (Japanese or English) and regardless of the sentence type (active object-initial sentences in Japanese, passive sentences in English, and unaccusative sentences in English). I will discuss how this result sheds light on the notion of incrementality in generation. In Chapter 5, I will synthesize the experimental findings presented in this thesis and in previous research to address the challenges to the single system view outlined in Chapter 1. I will then conclude by presenting a preliminary single system model that can potentially capture both the key sentence comprehension and sentence production data without assuming distinct mechanisms for each.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Chacón, Dustin Alfonso: Comparative psychosyntax. University of Maryland, 2015. @phdthesis{Chacón2015b,
title = {Comparative psychosyntax},
author = {Dustin Alfonso Chacón},
url = {http://www.colinphillips.net/wp-content/uploads/2015/08/chacon2015.pdf, Comparative psychosyntax},
year = {2015},
date = {2015-08-03},
school = {University of Maryland},
abstract = {Every difference between languages is a “choice point” for the syntactician, psycholinguist, and language learner. The syntactician must describe the differences in representations that the grammars of different languages can assign. The psycholinguist must describe how the comprehension mechanisms search the space of the representations permitted by a grammar to quickly and effortlessly understand sentences in real time. The language learner must determine which representations are permitted in her grammar on the basis of her primary linguistic evidence. These investigations are largely pursued independently, and on the basis of qualitatively different data. In this dissertation, I show that these investigations can be pursued in a way that is mutually informative. Specifically, I show how learnability concerns and sentence processing data can constrain the space of possible analyses of language differences.
In Chapter 2, I argue that “indirect learning”, or abstract, cross-construction syntactic inference, is necessary in order to explain how the learner determines which complementizers can co-occur with subject gaps in her target grammar. I show that adult speakers largely converge in the robustness of the that-trace effect, a constraint on the co-occurrence of complementizers and subject gaps observed in languages like English, but unobserved in languages like Spanish or Italian. I show that realistic child-directed speech has very few long-distance subject extractions in English, Spanish, and Italian, implying that learners must be able to distinguish these different hypotheses on the basis of other data. This is more consistent with more conservative approaches to these phenomena (Rizzi, 1982), which do not rely on abstract complementizer agreement, than with later analyses (Rizzi, 2006; Rizzi & Shlonsky, 2007). In Chapter 3, I show that resumptive pronoun dependencies inside islands in English are constructed in a non-active fashion, which contrasts with recent findings in Hebrew (Keshev & Meltzer-Asscher, ms). I propose that an expedient explanation of these facts is to suppose that resumptive pronouns in English are ungrammatical repair devices (Sells, 1984), whereas resumptive pronouns in island contexts are grammatical in Hebrew. This implies that learners must infer which analysis is appropriate for their grammars on the basis of some evidence in the linguistic environment. However, a corpus study reveals that resumptive pronouns in islands are exceedingly rare in both languages, implying that this difference must be indirectly learned. I argue that theories of resumptive dependencies which analyze resumptive pronouns as instances of the same abstract construction (e.g., Hayon 1973; Chomsky 1977) license this indirect learning, as long as resumptive dependencies in English are treated as ungrammatical repair mechanisms.
In Chapter 4, I compare active dependency formation processes in Japanese
and Bangla. These findings suggest that filler-gap dependencies are preferentially resolved with the first position available. In Japanese, this is the most deeply embedded clause, since embedded clauses always precede the embedding verb (Aoshima et al., 2004; Yoshida, 2006; Omaki et al., 2014). Bangla allows a within-language comparison of the relationship between active dependency formation processes and word order, since embedded clauses may precede or follow the embedding verb (Bayer, 1996). However, the results from three experiments in Bangla are mixed, suggesting a weaker preference for a linearly local resolution of filler-gap dependencies, unlike in Japanese. I propose a number of possible explanations for these facts, and discuss how differences in processing profiles may be accounted for in a variety of ways.
In Chapter 5, I conclude the dissertation.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Lago, Maria Sol: Memory and prediction in cross-linguistic sentence comprehension. University of Maryland, 2014. @phdthesis{Lago2014b,
title = {Memory and prediction in cross-linguistic sentence comprehension},
author = {Maria Sol Lago},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/lago2014.pdf, Memory and prediction in cross-linguistic sentence comprehension},
year = {2014},
date = {2014-08-01},
school = {University of Maryland},
abstract = {This dissertation explores the role of morphological and syntactic variation in sentence comprehension across languages. While most previous research has focused on how cross-linguistic differences affect the control structure of the language architecture (Lewis & Vasishth, 2005), here we adopt an explicit model of memory, content-addressable memory (Lewis & Vasishth, 2005; McElree, 2006), and examine how cross-linguistic variation affects the nature of the representations and processes that speakers deploy during comprehension. With this goal, we focus on two kinds of grammatical dependencies that involve an interaction between language and memory: subject-verb agreement and referential pronouns. In the first part of this dissertation, we use the self-paced reading method to examine how the processing of subject-verb agreement in Spanish, a language with a rich morphological system, differs from English. We show that differences in morphological richness across languages impact prediction processes while leaving retrieval processes fairly preserved. In the second part, we examine the processing of coreference in German, a language that, in contrast with English, encodes gender syntactically. We use eye-tracking to compare comprehension profiles during coreference and we find that only speakers of German show evidence of semantic reactivation of a pronoun’s antecedent. This suggests that retrieval of semantic information is dependent on syntactic gender, and demonstrates that German and English speakers retrieve qualitatively different antecedent representations from memory. Taken together, these results suggest that cross-linguistic variation in comprehension is more affected by the content of gender and number features than by their functional importance across languages.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Parker, Daniel: The cognitive basis for encoding and navigating linguistic structure. University of Maryland, 2014. @phdthesis{Parker2014b,
title = {The cognitive basis for encoding and navigating linguistic structure},
author = {Daniel Parker},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/parker2014.pdf, The cognitive basis for encoding and navigating linguistic structure},
year = {2014},
date = {2014-08-01},
school = {University of Maryland},
abstract = {This dissertation is concerned with the cognitive mechanisms that are used to encode and navigate linguistic structure. Successful language understanding requires mechanisms for efficiently encoding and navigating linguistic structure in memory. The timing and accuracy of linguistic dependency formation provides valuable insights into the cognitive basis of these mechanisms. Recent research on linguistic dependency formation has revealed a profile of selective fallibility: some linguistic dependencies are rapidly and accurately implemented, but others are not, giving rise to “linguistic illusions”. This profile is not expected under current models of grammar or language processing. The broad consensus, however, is that the profile of selective fallibility reflects dependency-based differences in memory access strategies, including the use of different retrieval mechanisms and the selective use of cues for different dependencies. In this dissertation, I argue that (i) the grain-size of variability is not dependency-type, and (ii) there is not a homogeneous cause for linguistic illusions. Rather, I argue that the variability is a consequence of how the grammar interacts with general-purpose encoding and access mechanisms. To support this argument, I provide three types of evidence. First, I show how to “turn on” illusions for anaphor resolution, a phenomenon that has resisted illusions in the past, reflecting a cue-combinatorics scheme that prioritizes structural information in memory retrieval. Second, I show how to “turn off” a robust illusion for negative polarity item (NPI) licensing, reflecting access to the internal computations during the encoding and interpretation of emerging semantic/pragmatic representations. Third, I provide computational simulations that derive both the presence and absence of the illusions from within the same memory architecture.
These findings lead to a new conception of how we mentally encode and navigate structured linguistic representations.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Chow, Wing Yee: The temporal dimension of linguistic prediction. University of Maryland, 2013. @phdthesis{Chow2013b,
title = {The temporal dimension of linguistic prediction},
author = {Wing Yee Chow},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/chow2013-diss.pdf, The temporal dimension of linguistic prediction},
year = {2013},
date = {2013-08-01},
school = {University of Maryland},
abstract = {This thesis explores how predictions about upcoming language inputs are computed during real-time language comprehension. Previous research has demonstrated humans’ ability to use rich contextual information to compute linguistic prediction during real-time language comprehension, and it has been widely assumed that contextual information can impact linguistic prediction as soon as it arises in the input. This thesis questions this key assumption and explores how linguistic predictions develop in real-time. I provide event-related potential (ERP) and reading eye-movement (EM) evidence from studies in Mandarin Chinese and English that even prominent and unambiguous information about preverbal arguments’ structural roles cannot immediately impact comprehenders’ verb prediction. I demonstrate that the N400, an ERP response that is modulated by a word’s predictability, becomes sensitive to argument role-reversals only when the time interval for prediction is widened. Further, I provide initial evidence that different sources of contextual information, namely, information about preverbal arguments’ lexical identity vs. their structural roles, may impact linguistic prediction on different time scales. I put forth a research framework that aims to characterize the mental computations underlying linguistic prediction along a temporal dimension.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Kush, Dave: Respecting relations: memory access and antecedent retrieval in incremental sentence processing. University of Maryland, 2013. @phdthesis{Kush2013,
title = {Respecting relations: memory access and antecedent retrieval in incremental sentence processing},
author = {Dave Kush},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/kush2013.pdf, Respecting relations: memory access and antecedent retrieval in incremental sentence processing},
year = {2013},
date = {2013-08-01},
school = {University of Maryland},
abstract = {This dissertation uses the processing of anaphoric relations to probe how linguistic information is encoded in and retrieved from memory during real-time sentence comprehension. More specifically, the dissertation attempts to resolve a tension between the demands of a linguistic processor implemented in a general-purpose cognitive architecture and the demands of abstract grammatical constraints that govern language use. The source of the tension is the role that abstract configurational relations (such as c-command, Reinhart 1983) play in constraining computations. Anaphoric dependencies are governed by formal grammatical constraints stated in terms of relations. For example, Binding Principle A (Chomsky 1981) requires that antecedents for local anaphors (like the English reciprocal each other) bear the c-command relation to those anaphors. In incremental sentence processing, antecedents of anaphors must be retrieved from memory. Recent research has motivated a model of processing that exploits a cue-based, associative retrieval process in content-addressable memory (e.g. Lewis, Vasishth & Van Dyke 2006) in which relations such as c-command are difficult to use as cues for retrieval. As such, the c-command constraints of formal grammars are predicted to be poorly implemented by the retrieval mechanism. I examine retrieval’s sensitivity to three constraints on anaphoric dependencies: Principle A (via Hindi local reciprocal licensing), the Scope Constraint on bound-variable pronoun licensing (often stated as a c-command constraint, though see Barker 2012), and Crossover constraints on pronominal binding (Postal 1971, Wasow 1972). The data suggest that retrieval exhibits fidelity to the constraints: structurally inaccessible NPs that match an anaphoric element in morphological features do not interfere with the retrieval of an antecedent in most cases considered.
In spite of this alignment, I argue that retrieval’s apparent sensitivity to c-command constraints need not motivate a memory access procedure that makes direct reference to c-command relations. Instead, proxy features and general parsing operations conspire to mimic the extension of a system that respects c-command constraints. These strategies provide a robust approximation of grammatical performance while remaining within the confines of an independently-motivated general-purpose cognitive architecture.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Lewis, Shevaun: Pragmatic enrichment in language processing and development. University of Maryland, 2013. @phdthesis{Lewis2013,
title = {Pragmatic enrichment in language processing and development},
author = {Shevaun Lewis},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/lewis2013.pdf, Pragmatic enrichment in language processing and development},
year = {2013},
date = {2013-08-01},
school = {University of Maryland},
abstract = {The goal of language comprehension for humans is not just to decode the semantic content of sentences, but rather to grasp what speakers intend to communicate. To infer speaker meaning, listeners must at minimum assess whether and how the literal meaning of an utterance addresses a question under discussion in the conversation. In cases of implicature, where the speaker intends to communicate more than just the literal meaning, listeners must access additional relevant information in order to understand the intended contribution of the utterance. I argue that the primary challenge for inferring speaker meaning is in identifying and accessing this relevant contextual information. In this dissertation, I integrate evidence from several different types of implicature to argue that both adults and children are able to execute complex pragmatic inferences relatively efficiently, but encounter some difficulty finding what is relevant in context. I argue that the variability observed in processing costs associated with adults’ computation of scalar implicatures can be better understood by examining how the critical contextual information is presented in the discourse context. I show that children’s oft-cited hyper-literal interpretation style is limited to scalar quantifiers. Even 3-year-olds are adept at understanding indirect requests and “parenthetical” readings of belief reports. Their ability to infer speaker meanings is limited only by their relative inexperience in conversation and lack of world knowledge.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Dillon, Brian: Structured access in sentence comprehension. University of Maryland, 2011. @phdthesis{Dillon2011,
title = {Structured access in sentence comprehension},
author = {Brian Dillon},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/dillon2011.pdf, Structured access in sentence comprehension},
year = {2011},
date = {2011-08-01},
school = {University of Maryland},
abstract = {This thesis is concerned with the nature of memory access during the construction of long-distance dependencies in online sentence comprehension. In recent years, an intense focus on the computational challenges posed by long-distance dependencies has proven to be illuminating with respect to the characteristics of the architecture of the human sentence processor, suggesting a tight link between general memory access procedures and sentence processing routines (Lewis & Vasishth 2005; Lewis, Vasishth, & Van Dyke 2006; Wagers, Lau & Phillips 2009). The present thesis builds upon this line of research, and its primary aim is to motivate and defend the hypothesis that the parser accesses linguistic memory in an essentially structured fashion for certain long-distance dependencies. In order to make this case, I focus on the processing of reflexive and agreement dependencies, and ask whether or not non-structural information such as morphological features are used to gate memory access during syntactic comprehension. Evidence from eight experiments in a range of methodologies in English and Chinese is brought to bear on this question, providing arguments from interference effects and time-course effects that primarily syntactic information is used to access linguistic memory in the construction of certain long-distance dependencies. The experimental evidence for structured access is compatible with a variety of architectural assumptions about the parser, and I present one implementation of this idea in a parser based on the ACT-R memory architecture. In the context of such a content-addressable model of memory, the claim of structured access is equivalent to the claim that only syntactic cues are used to query memory.
I argue that structured access reflects an optimal parsing strategy in the context of a noisy, interference-prone cognitive architecture: abstract structural cues are favored over lexical feature cues for certain structural dependencies in order to minimize memory interference in online processing.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Omaki, Akira: Commitment and flexibility in the developing parser. University of Maryland, 2010. @phdthesis{Omaki2010,
title = {Commitment and flexibility in the developing parser},
author = {Akira Omaki},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/omaki2010-diss.pdf, Commitment and flexibility in the developing parser},
year = {2010},
date = {2010-08-01},
school = {University of Maryland},
abstract = {This dissertation investigates adults’ and children’s sentence processing mechanisms, with a special focus on how multiple levels of linguistic representation are incrementally computed in real time, and how this process affects the parser’s ability to later revise its early commitments. Using cross-methodological and cross-linguistic investigations of long-distance dependency processing, this dissertation demonstrates how paying explicit attention to the procedures by which linguistic representations are computed is vital to understanding both adults’ real time linguistic computation and children’s reanalysis mechanisms. The first part of the dissertation uses time course evidence from self-paced reading and eye tracking studies (reading and visual world) to show that long-distance dependency processing can be decomposed into a sequence of syntactic and interpretive processes. First, the reading experiments provide evidence that suggests that filler-gap dependencies are constructed before verb information is accessed. Second, visual world experiments show that, in the absence of information that would allow hearers to predict verb content in advance, interpretive processes in filler-gap dependency computation take around 600ms. These results argue for a predictive model of sentence interpretation in which syntactic representations are computed in advance of interpretive processes. The second part of the dissertation capitalizes on this procedural account of filler-gap dependency processing, and reports cross-linguistic studies on children’s long-distance dependency processing. Interpretation data from English and Japanese demonstrate that children actively associate a fronted wh-phrase with the first VP in the sentence, and successfully retract such active syntactic commitments when the lack of felicitous interpretation is signaled by verb information, but not when it is signaled by syntactic information.
A comparison of the process of anaphor reconstruction in adults and children further suggests that verb-based thematic information is an effective revision cue for children. Finally, distributional analyses of wh-dependencies in child-directed speech are conducted to investigate how parsing constraints impact language acquisition. It is shown that the actual properties of the child parser can skew the input distribution, such that the effective distribution differs drastically from the input distribution seen from a researcher’s perspective. This suggests that properties of developing perceptual mechanisms deserve more attention in language acquisition research.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Lau, Ellen: The predictive nature of language comprehension. University of Maryland, 2009. @phdthesis{Lau2009,
title = {The predictive nature of language comprehension},
author = {Ellen Lau},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/lau2009.pdf, The predictive nature of language comprehension},
year = {2009},
date = {2009-08-01},
school = {University of Maryland},
abstract = {This dissertation explores the hypothesis that predictive processing—the access and construction of internal representations in advance of the external input that supports them—plays a central role in language comprehension. Linguistic input is frequently noisy, variable, and rapid, but it is also subject to numerous constraints. Predictive processing could be a particularly useful approach in language comprehension, as predictions based on the constraints imposed by the prior context could allow computation to be speeded and noisy input to be disambiguated. Decades of previous research have demonstrated that the broader sentence context has an effect on how new input is processed, but less progress has been made in determining the mechanisms underlying such contextual effects. This dissertation is aimed at advancing this second goal, by using both behavioral and neurophysiological methods to motivate predictive or top-down interpretations of contextual effects and to test particular hypotheses about the nature of the predictive mechanisms in question. The first part of the dissertation focuses on the lexical-semantic predictions made possible by word and sentence contexts. MEG and fMRI experiments, in conjunction with a meta-analysis of the previous neuroimaging literature, support the claim that an ERP effect classically observed in response to contextual manipulations—the N400 effect—reflects facilitation in processing due to lexical-semantic predictions, and that these predictions are realized at least in part through top-down changes in activity in left posterior middle temporal cortex, the cortical region thought to represent lexical-semantic information in long-term memory. The second part of the dissertation focuses on syntactic predictions.
ERP and reaction time data suggest that the syntactic requirements of the prior context impact processing of the current input very early, and that predicting the syntactic position in which the requirements can be fulfilled may allow the processor to avoid a retrieval mechanism that is prone to similarity-based interference errors. In sum, the results described here are consistent with the hypothesis that a significant amount of language comprehension takes place in advance of the external input, and suggest future avenues of investigation towards understanding the mechanisms that make this possible.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Stroud, Clare: Structural and semantic selectivity in the electrophysiology of sentence comprehension. University of Maryland, 2008. @phdthesis{Stroud2008,
title = {Structural and semantic selectivity in the electrophysiology of sentence comprehension},
author = {Clare Stroud},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/stroud2008.pdf,
Structural and semantic selectivity in the electrophysiology of sentence comprehension},
year = {2008},
date = {2008-12-01},
school = {University of Maryland},
abstract = {This dissertation is concerned with whether the sentence processor can compute plausible relations among a cluster of neighboring open class words without taking into account the relationships between these words as dictated by the structure of the sentence. It has been widely assumed that compositional semantics is built on top of syntactic structures (Heim & Kratzer, 1998; Pollard & Sag, 1994). This view has been challenged by recent electrophysiological findings (Kim and Osterhout, 2005; Kuperberg, 2007; van Herten et al., 2005, 2006) that appear to show that semantic composition can proceed independently of syntactic structure. This dissertation investigates whether the evidence for independent semantic composition is as strong and widespread as has been previously claimed. Recent studies have shown that sentences containing a semantically anomalous interpretation but an unambiguous, grammatical structure (e.g., The meal was devouring...) elicit a P600 response, the component classically elicited by syntactic anomalies, rather than an N400, the component typically elicited by semantic anomalies (Kim and Osterhout, 2005). This has been interpreted as evidence that the processor analyzed meal as a good theme for devour, even though this interpretation is not supported by the sentential structure. This led to the claim that semantic composition can proceed independently of syntactic structure. Two event-related potential (ERP) studies investigated whether the processor exploits prior structural biases and commitments to restrict semantic interpretations to those that are compatible with that expected structure. A further ERP study and a review of relevant studies reveal that in the majority of studies the P600 is not modulated by manipulations of thematic fit or semantic association between the open class words.
We argue that a large number of studies that have been taken as evidence for an independent semantic processing stream can be explained as violations of the verb’s requirement that its subject be agentive. A small number of studies in verb-final languages cannot be explained in this way, and may be evidence of independent semantic composition, although further experimental work is needed. We conclude that the evidence for independent semantic composition is not as extensive as was previously thought.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Wagers, Matthew: The structure of memory meets memory for structure in linguistic cognition. University of Maryland, 2008. @phdthesis{Wagers2008,
title = {The structure of memory meets memory for structure in linguistic cognition},
author = {Matthew Wagers},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/wagers2008.pdf, The structure of memory meets memory for structure in linguistic cognition},
year = {2008},
date = {2008-08-01},
school = {University of Maryland},
abstract = {This dissertation is concerned with the problem of how structured linguistic representations interact with the architecture of human memory. Much recent work has attempted to unify real-time linguistic memory with a general content-addressable architecture (Lewis & Vasishth, 2005; McElree, 2006). Because grammatical principles and constraints are strongly relational in nature, and linguistic representation hierarchical, this kind of architecture is not well suited to restricting the search of memory to grammatically-licensed constituents alone. This dissertation investigates under what conditions real-time language comprehension is grammatically accurate. Two kinds of grammatical dependencies were examined in reading time and speeded grammaticality experiments: subject-verb agreement licensing in agreement attraction configurations (“The runners who the driver wave to ...”; Kimball & Aissen, 1971; Bock & Miller, 1991), and active completion of wh-dependencies. We develop a simple formal model of agreement attraction in an associative memory that makes accurate predictions across different structures. We conclude that dependencies that can be licensed only retrospectively, by searching the memory to generate candidate analyses, are the most prone to grammatical infidelity. The exception may be retrospective searches with especially strong contextual restrictions, as in reflexive anaphora. However, dependencies that can be licensed principally by a prospective search, like wh-dependencies or backwards anaphora, are highly grammatically accurate.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Goro, Takuya: Language specific constraints on scope interpretation in first language acquisition. University of Maryland, 2007. @phdthesis{Goro2007,
title = {Language specific constraints on scope interpretation in first language acquisition},
author = {Takuya Goro},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/goro2007.pdf, Language specific constraints on scope interpretation in first language acquisition},
year = {2007},
date = {2007-08-01},
school = {University of Maryland},
abstract = {This dissertation investigates the acquisition of language-specific constraints on scope interpretation by Japanese preschool children. Several constructions in Japanese do not allow scope interpretations that the corresponding English sentences do allow. First, in Japanese transitive sentences with multiple quantificational arguments, an inverse scope interpretation is disallowed, due to the Rigid Scope Constraint. Second, Japanese logical connectives cannot be interpreted under the scope of local negation, due to their Positive Polarity. Thirdly, in Japanese infinitival complement constructions with implicative matrix verbs like wasureru (“forget”) the inverse scope interpretation is required, due to the Anti-Reconstruction Constraint. The main goal of this research is to determine how Japanese children learn these constraints on scope interpretations. To that end, three properties of the acquisition task that have an influence on the learnability of linguistic knowledge are examined: productivity, no negative evidence, and arbitrariness. The results of experimental investigations show that Japanese children productively generate scope interpretations that are never exemplified in the input. For example, with sentences that contain two quantificational arguments, Japanese children accessed inverse scope interpretations that Japanese adults do not allow. Also, Japanese children interpret the disjunction ka under the scope of local negation, which is not a possible interpretive option in the adult language. These findings clearly show that children do not acquire these scope constraints through conservative learning, and raise the question of how they learn to purge their non-adult interpretations. It is argued that input data do not provide learners with negative evidence (direct or indirect) against particular scope interpretations. 
Two inherent properties of input data about possible scope interpretations, data sparseness and indirectness, make negative evidence too unreliable as a basis for discovering what scope interpretation is impossible. In order to solve the learnability problems that children’s scope productivity raises, I suggest that the impossibility of their non-adult interpretations is acquired by learning some independently observable properties of the language. In other words, the scope constraints are not arbitrary in the sense that their effects are consequences of other properties of the grammar of Japanese.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Pablos, Leticia: Preverbal structure building in Romance languages and Basque. University of Maryland, 2006. @phdthesis{Pablos2006,
title = {Preverbal structure building in Romance languages and Basque},
author = {Leticia Pablos},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/pablos2006.pdf, Preverbal structure building in Romance languages and Basque},
year = {2006},
date = {2006-08-01},
school = {University of Maryland},
abstract = {The main goal of the work in this dissertation is to investigate pre-verbal structure building effects in languages with different configurations, such as Spanish, Galician and Basque, using different pre-verbal cues to show that syntactic structure can be both interpreted and anticipated before the verbal head. I examine the syntax of Clitic-Left Dislocations (CLLDs) and other kinds of left-dislocations in Spanish and then analyze their processing. I concentrate on the use of clitic pronouns in Spanish and Galician in CLLD constructions that require the presence of the clitic pronoun to interpret the left-dislocated phrase, and I examine whether the left-dislocation is interpreted at the clitic pronoun. Experimental results from three self-paced reading experiments provide evidence that the clitic in these constructions is required and used to interpret the thematic features of the topicalized NP before the verb. Thus, I demonstrate that clitic pronouns are used as pre-verbal cues in parsing and that the active search mechanism is also triggered in long-distance dependencies involving clitic pronouns. I conclude that the active search mechanism is a more general architectural mechanism of the parser that is triggered in all kinds of long-distance dependencies, regardless of whether the search is triggered by gaps or pronouns. In Basque, verbal auxiliaries overtly encode agreement information that reflects the number of arguments of the verbal head. In negatives, auxiliaries are obligatorily fronted and split from the verbal head with which they otherwise form a cluster. Thus, verbal auxiliaries in Basque are a pre-verbal morphological cue that can assist the parser in predicting structure.
Specifically, I examine how predictions for the upcoming structure of the sentence are determined by agreement information on the number of arguments specified in the auxiliary and by the mismatch of this auxiliary with the case features of the NP that follows it. I provide results from a self-paced reading experiment to argue that the parser uses the information encoded in the auxiliaries and demonstrate that the mismatch of the auxiliary with the following NP can prevent the reader from following a garden-path analysis of the sentence.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Yoshida, Masaya: Constraints and mechanisms in long-distance dependency formation. University of Maryland, 2006. @phdthesis{Yoshida2006,
title = {Constraints and mechanisms in long-distance dependency formation},
author = {Masaya Yoshida},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/yoshida2006.pdf, Constraints and mechanisms in long-distance dependency formation},
year = {2006},
date = {2006-08-01},
school = {University of Maryland},
abstract = {This thesis aims to reveal the mechanisms and constraints involved in long-distance dependency formation, both in the static knowledge of language and in real-time sentence processing. Special attention is paid to the grammar and processing of island constraints. Several experiments show that in a head-final language like Japanese, global constraints like island constraints are applied long before decisive information, such as verb heads and relative heads, is encountered. Based on this observation, the thesis argues that there is a powerful predictive mechanism at work behind real-time sentence processing. A model of this predictive mechanism is proposed. This thesis examines the nature of several island constraints, specifically Complex NP Islands induced by relative clauses, and clausal adjunct islands. It is argued that in the majority of languages, both relative clauses and adjunct clauses are islands, but there is a small subset of languages (including Japanese, Korean and Malayalam) where extraction out of adjunct clauses seems to be allowed. Applying well-established syntactic tests to the relevant constructions in Japanese, it is established that dependencies crossing adjunct clauses are indeed created by movement operations, and yet extraction from adjuncts is allowed. Building on previous findings, the thesis turns to the investigation of the interaction between real-time sentence processing and island constraints. Looking specifically at Japanese, a head-final language, this thesis asks how the structure of sentences is built and what constraints are applied to the structure building process. A series of experiments shows that in Japanese, even before high-information bearing units such as verbs, relative heads or adjunct markers are encountered, the structural skeleton is built, and such pre-computed structures are highly articulated.
It is shown that structural constraints on long-distance dependencies are imposed on the pre-built structure. It is further shown that this finding supports the incrementality of sentence processing.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Kazanina, Nina: The acquisition and processing of backwards anaphora. University of Maryland, 2005. @phdthesis{Kazanina2005,
title = {The acquisition and processing of backwards anaphora},
author = {Nina Kazanina},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/kazanina2005.pdf, The acquisition and processing of backwards anaphora},
year = {2005},
date = {2005-08-01},
school = {University of Maryland},
abstract = {This dissertation investigates long-distance backwards pronominal dependencies (backwards anaphora or cataphora) and constraints on such dependencies from the viewpoint of language development and real-time language processing. Based on the findings from a comprehension experiment with Russian-speaking children and on real-time sentence processing data from English and Russian adults I argue for a position that distinguishes structural and non-structural constraints on backwards anaphora. I show that unlike their non-syntactic counterparts, structural constraints on coreference, in particular Principle C of the Binding Theory (Chomsky 1981), are active at the earliest stage of language development and of real-time processing. In language acquisition, the results of a truth-value judgment task with 3-6 year old Russian-speaking children reveal a striking developmental asymmetry between Principle C, a cross-linguistically consistent syntactic constraint on coreference, and a Russian-specific discourse constraint on coreference. Whereas Principle C is respected by children already at the age of three, the Russian-specific (discourse) constraint is not operative in child language until the age of five. These findings present a challenge for input-driven accounts of language acquisition and are most naturally explained in theories that admit the existence of innately specified principles that underlie linguistic representations. In real-time processing, the findings from a series of self-paced reading experiments on English and Russian show that in backwards anaphora contexts the parser initiates an active search for an antecedent for the pronoun which is limited to positions that are not subject to structural constraints on coreference, e.g. Principle C.
This grammatically constrained active search mechanism observed in the processing of backwards anaphora is similar to the mechanism found in the processing of another type of long-distance dependency, the wh-dependency. I suggest that the early application of structural constraints on long-distance dependencies is due to reasons of parsing efficiency rather than to their architectural priority, since such constraints help restrict the search space of possible representations to be built by the parser. A computational parsing algorithm is developed that combines the constrained active search mechanism with a strict incremental left-to-right structure building procedure.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Aoshima, Sachiko: The grammar and parsing of wh-dependencies. University of Maryland, 2003. @phdthesis{Aoshima2003,
title = {The grammar and parsing of wh-dependencies},
author = {Sachiko Aoshima},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/aoshima2003.pdf, The grammar and parsing of wh-dependencies},
year = {2003},
date = {2003-08-01},
school = {University of Maryland},
abstract = {The aim of this thesis is to explain how grammar relates to real-time comprehension of wh-dependencies. It proposes an explanatory model which makes minimal assumptions to derive the facts seen in both performance and comprehension. The assumptions are twofold. One is that grammatical constraints drive learning and parsing theory. The other is that incrementality and efficiency force left corner analysis. The proposed model can explain the following, heretofore unrelated, facts: (i) the time course of gap-creation for filler-gap dependencies suggests incremental gap-positing, (ii) structure-building is incremental in head-final structures, (iii) the insertion principle forced by the left corner constraint derives unforced reanalysis only for predicted categories, (iv) the constraint on rapid gap interpretation is derived from restrictions on merger enforced by the left corner constraint, (v) off-line / on-line contrasts of wh-scope interpretation are also explained by the early prediction of a question marker given a wh-element, and (vi) Subjacency governs (wh-)scrambling, which is an overt movement operation. The psycholinguistic experiments, which take advantage of the strict verb-final property of Japanese, provide evidence that the real-time formation of filler-gap dependencies is driven by the satisfaction of grammatical constraints, rather than simply by the need to create a gap. They also find a filler-gap relation prior to the linearly first verb, suggesting that structure-building operates incrementally even when critical lexical heads are delayed. The experimental findings imply that there is a case where reanalysis is allowed even though it is not forced. This unforced reanalysis contrasts with a view of reanalysis as a last resort operation.
A feature-based left-corner parsing model developed in this thesis not only predicts the time course of gap-creation in head-final structures and incremental structure-building, but also distinguishes between contrasting views of reanalysis. In addition, it accounts for why the gap is postulated in a grammatically legitimate position ‘as soon as possible’. Under the current system, furthermore, the same grammatical principles underlying the local relation of a wh-element and a wh-scope marker also predict both the on-line and off-line wh-scope interpretations observed here. The model thus shows that the competence grammar operates dynamically as part of the parsing process.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Kim, Meesook: A cross-linguistic perspective on the acquisition of locative verbs. University of Delaware, 1999. @phdthesis{Kim1999b,
title = {A cross-linguistic perspective on the acquisition of locative verbs},
author = {Meesook Kim},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/kim1999.pdf, A cross-linguistic perspective on the acquisition of locative verbs},
year = {1999},
date = {1999-08-01},
school = {University of Delaware},
abstract = {The main goal of this thesis is to determine what kind of learning mechanism allows children to reach adult-like knowledge of their native language, focusing specifically on a cross-linguistic comparison of the syntax and semantics of locative verbs. As a possible learning mechanism, innate and universal “linking rules” have been suggested to solve the learnability problem of how children can learn which verbs allow syntactic alternation and which verbs do not. If consistent mappings between syntax and semantics are universal, and if children can take advantage of them, then learning the meanings of verbs and their syntactic possibilities could be made easier. However, the existence of cross-linguistic variation in syntax-semantics mappings may pose a challenge for learning theories based on universal mappings. In Chapters 2 and 3, I characterize to what extent there are universal syntax-semantics correspondences, and to what extent there are language-specific syntax-semantics correspondences across languages, in terms of the syntax of locative verbs. A cross-linguistic survey of locative verb syntax in 13 languages shows that across languages, some syntax-semantics correspondences appear to be universal, some correspondences appear to apply only within one of two broad language groups, and some correspondences appear to involve idiosyncratic language-by-language variation. More specifically, I show that cross-linguistic variation in the syntax of locative verbs is quite restricted, dividing languages into two basic classes. Korean-type languages have a very simple pattern for locative verbs. All locative verbs allow Figure frames and there are no Non-alternating Ground verbs in these languages. In English-type languages, basic change-of-state verbs always allow Ground frames. Furthermore, I show that certain aspects of locative verb syntax correlate with an independent morphological property, namely the availability of V-V Compounding or verb serialization.
This simple morphological cue may help children to figure out the properties of locative verbs in their target language. In Chapter 4, I show how much children have learned about the syntax of locative verbs by age 3-4, and how consistent syntax-semantics correspondences can help children learn the syntax of locative verbs, given potential problems raised by cross-linguistic variation. This is shown through an elicited production task with child and adult speakers of both English and Korean. In addition to learning mechanisms based on linking rules, I examine another potential learning mechanism based on distributional properties of the input, in order to find out what information is available in the input to learners. Finally, I show to what extent learning strategies based on universal mappings can and cannot help children to succeed in learning the syntax of locative verbs, and to what extent learning strategies based on the use of distributional properties of the input can help children to reach adult-like knowledge of their target language.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Schneider, David: Parsing and incrementality. University of Delaware, 1999. @phdthesis{Schneider1999,
title = {Parsing and incrementality},
author = {David Schneider},
url = {http://www.colinphillips.net/wp-content/uploads/2014/08/schneider1999.pdf, Parsing and incrementality},
year = {1999},
date = {1999-08-01},
school = {University of Delaware},
abstract = {There is a great deal of evidence that language comprehension occurs very rapidly. To account for this, it is widely, but not universally, assumed in the psycholinguistic literature that every word of a sentence is integrated into a syntactic representation of the sentence as soon as the word is encountered. This means that it is not possible to wait for subsequent words to provide information to guide a word’s initial attachment into syntactic structure. In this dissertation I show how syntactic structures can be built on a word-by-word incremental basis. This work is motivated by the desire to explain how structure can be built incrementally. A psycholinguistically plausible theory of parsing should generalize to all languages. In this work I show not only how head-initial languages like English can be parsed incrementally, but also how head-final languages like Japanese and German can be parsed incrementally. One aspect of incremental parsing that is particularly troublesome in head-final languages is that it is not always clear how a constituent should be structured in advance of the phrase-final heads. There is a significant amount of temporary ambiguity in head-final languages related to the fact that heads of constituents are not available until the end of the phrase. In this work I show that underspecification of the features of a head allows for incremental structuring of the input in head-final structures, while still retaining the temporary ambiguity that is so common in these languages. The featural underspecification allowed by this system is extended to categorial features; I do not assume that every head must always be specified for its category. I assume that the incremental parser builds structures in accord with the principles of the grammar. In other words, there should be no need to submit a structure built by the parser to a separate grammar module to determine whether or not the sentence obeys the grammar.
As one aspect of this, I show how wh-movement phenomena can be accommodated within the theory. As part of the treatment of wh-movement, constraints on wh-movement are incorporated into the system, thereby allowing the difference between grammatical and ungrammatical wh-movement to be captured in the parse tree. In addition to being incremental and cross-linguistically generalizable, a parsing theory should account for the rest of human parsing behavior. I show that a number of the structurally-motivated parsing heuristics can be accommodated within the general parsing theory presented here. As part of the investigation of the incremental parser, experimental evidence is presented that establishes a preference for structure-preserving operations in the face of temporary ambiguity. In particular, the experiments show that once a commitment has been made to a particular analysis of a verbal argument, there is a preference to avoid reanalyzing the argument. This preference holds even though the reanalysis is not particularly difficult, and the analysis that is adopted in preference to the reanalysis disobeys a general parsing preference for attachments to recent material. Thus, it appears that existing structural assumptions are rejected only as a last resort. Finally, to demonstrate the theory is explicit enough to make specific predictions, I implement portions of the theory as a computer program.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}