Grammar and Cognitive Architecture
What is a mental grammar? We have to answer this basic question if we want to give a more detailed account of the neurocognitive structures and processes involved in language. A standard view in linguistics for the past 50 years is that a mental grammar is a task-independent and time-independent system of abstract knowledge, which should be distinguished from task-specific real-time systems that serve comprehension and production. (Sometimes the competence-performance distinction arises in this context, though I find it unhelpful.) This is a serious hypothesis, and something like this is almost certainly true for mathematical knowledge — but I believe that it is incorrect for language. I think that it’s preferable to understand the mental grammar as a real-time computational system, and I think that the evidence supports this view. This issue has underpinned much of our research for the past 20 years. A good orientation can be found in recent papers with Shevaun Lewis. (Phillips 1996; Phillips 2003; Phillips 2013; Lewis & Phillips 2013; Phillips & Lewis, in press)
If the mental grammar is to be understood as a real-time computational system, does this have any impact on folks pursuing traditional linguistic questions? Sometimes, yes. In early work I argued that some fundamental problems in syntactic constituency can be solved once we assume that structures are built incrementally from left to right. The basic observation is that the word-by-word growth of a structure leads to metamorphosis: sequences that correspond to constituents at one step in the derivation cease to be constituents as new material is added. This can lead to a principled account of why different syntactic diagnostics yield apparently conflicting results (Phillips 1996, 2003). I haven’t developed this line of work further in recent years, mostly because we’ve been so consumed by other parts of the problem. There has been some subsequent sniping over specific analyses that I proposed, but to my knowledge nobody has taken on the challenge of giving a better account of constituency conflicts. A recent paper attempts a synthesis of the state of play in that domain (Phillips & Lewis 2013). A number of other linguists have presented additional arguments in favor of left-to-right grammatical derivations.
Some generalizations about linguistic acceptability should not be blamed on the grammar. But which ones? The literature contains a number of independent debates about whether some phenomena should be understood as epiphenomena of language processing difficulty or semantic incoherence. The most well-known debate involves syntactic islands, whose grammatical status has been questioned almost since they were first proposed in the 1960s. But there are other less widely known debates. We should not want vague appeals to “processing difficulty” to serve as a scientific rug for sweeping recalcitrant facts under. We have studied a number of different phenomena where reductionist accounts have been proposed, and we’ve tried to develop more systematic tests and diagnostics. Each case should be taken on its merits, and our findings support reductionist accounts for some phenomena and challenge them for others. So I find it mildly amusing that I’ve been harshly criticized for advocating a reductionist position in some cases, and harshly criticized for taking the opposite position in other cases. (Phillips 2013a; Phillips 2013b; Sprouse et al. 2012; Sprouse et al. 2013; Wagers et al. 2009)
Acceptability and Armchair Linguistics
One of the attractive features of linguistics is that a lot of the facts are “cheap”. You can gather dozens of acceptability judgments before breakfast and be deep into theory development by lunchtime. This means that linguists tend to spend their time worrying about different parts of the scientific process than do lab scientists. But perhaps this is all too easy, and a lot of the ‘facts’ that linguists seek to explain are bogus. This concern has been around for decades, but it has become prominent recently, in the context of the broader Repli-gate crisis in psychology and the availability of crowd-sourcing tools for internet experimentation. Some friends have argued that linguistics would be taken more seriously as a science if it replaced traditional data collection approaches with quantitative acceptability judgment methods. I disagree, and I’ve waded into the fight, sometimes with my tongue firmly in my cheek (Phillips 2010). We can’t reasonably be accused of shying away from quantitative experimental methods for studying grammar, and we test more people in large-scale acceptability studies than just about anyone (last count: 1800 participants in 50 experiments). I certainly think that linguistics has problems, but this is not the real problem, and that is not the real solution.
Why does linguistics need the experimental methods that our team likes to use? One popular answer is that we’re useful because our new-fangled tools can provide evidence to decide among competing theoretical proposals. If traditional methods cannot settle the dispute over empty categories in unbounded dependencies, perhaps a psycholinguist can tell us the answer. And perhaps an experiment can resolve the question of whether there is structure in ellipsis sites. (Some people even refer to this as ‘empirical’ evidence, contrasting with traditional ‘theoretical’ evidence. This drives me nuts. It’s all empirical!) It’s a nice idea, but I’m afraid it rarely works. Time-based methods are generally good for evaluating time-based hypotheses, and most traditional linguistic analyses disavow time-based hypotheses. In general, I think that our methods are useful not for answering questions that we already had, but for answering questions that weren’t even on our radar previously. (Phillips & Wagers 2007; Phillips & Parker 2014)
Every independent point of linguistic variation represents a choice point for the learner. Therefore, we should want (i) to minimize the number of choice points that learners face, and (ii) to ensure that every choice point corresponds to a readily observable fact. Otherwise, children are going to be in trouble (they’re special, but language learning is not magic). This problem formulation leads to a natural research partnership between comparative linguistics and language acquisition, and it was articulated clearly in the 1980s Principles and Parameters framework (Chomsky 1981). But the partnership was always a bit of an arms-length relationship, and more recently it has become a full-on estrangement. The 1980s program delivered little for working developmentalists, and 30 years of research on typology and micro-variation has led to despair over finding the neat parametric clusters that people dreamed of in the 1980s. I think this despair is premature, and that the field has been misled by a focus on variation in easy-to-observe properties, which dominate typological and dialect studies. It’s important to focus on understanding (i) what is “hard to observe”, and (ii) whether we find micro-variation in those hard-to-observe properties. This calls for research on children’s learning experience, and how they process and make inferences from that experience. But it also calls for a specific type of comparative linguistics, and it explains my fascination with cross-language variation in prima facie hard-to-observe phenomena such as islands, scope, and aspect. (Kush 2013; Goro 2007; Chacon et al. 2014; Kazanina & Phillips 2007)
Publications on Grammar and Cognitive Architecture
including PhD dissertations supervised
| Phillips, Colin; Gaston, Phoebe; Huang, Nick; Muller, Hanna: Theories all the way down: remarks on "theoretical" and "experimental" linguistics. In: Goodall, G. (Ed.): Cambridge Handbook of Experimental Syntax, Cambridge University Press, to appear. (Type: Book Chapter) |
It is common in linguistics to draw a contrast between “theoretical” and “experimental” research, following terminology used in other fields of science. Researchers who pursue experimental research are often asked about the theoretical consequences of their work. Such questions generally equate “theoretical” with theories at a specific high level of abstraction, guided by the questions of traditional linguistic theory. These theories focus on the structural representation of sentences in terms of discrete units, without regard to order, time, finer-grained memory encoding, or the neural circuitry that supports linguistic computation. But there is little need for the high-level descriptions to have privileged status. There are interesting theoretical questions at all levels of analysis. A common experience in our group’s work is that we embark on a project guided by its apparent relevance to high-level theoretical debates. We then discover that this relevance depends on linking assumptions that are not as straightforward as we initially thought. And then we discover new theoretical questions at lower levels of analysis that we had not even been aware of previously. We illustrate this using examples from many different lines of experimental research on syntactic issues.
| Momma, Shota; Phillips, Colin: The relationship between parsing and generation. In: Annual Review of Linguistics, vol. 4, pp. 233-254, Forthcoming. (Type: Journal Article) |
Humans use their linguistic knowledge in at least two ways. On one hand, they use their linguistic knowledge to convey what they mean to others or to themselves. On the other hand, they use their linguistic knowledge to understand what others say or they themselves say. In either case, they must assemble the syntactic structures of sentences in a systematic fashion, in accordance with the grammar of their language. In this article, we advance the view that a single mechanism for building sentence structure may be sufficient for structure building in comprehension and production. We argue that differing behaviors reduce to differences in the available information in either task. This view has broad implications for the architecture of the human language system, and provides a useful framework for integrating largely independent research programs on comprehension and production by both constraining the models and uncovering new questions that can drive further research.
| Huang, Nick; Gaston, Phoebe; Phillips, Colin: The logic of syntactic priming and acceptability judgments. In: Behavioral and Brain Sciences, Forthcoming, (commentary on target article by Branigan and Pickering). (Type: Journal Article) |
A critical flaw in Branigan and Pickering’s advocacy of structural priming is the absence of a theory of priming. This undermines their claims about the value of priming as a methodology. In contrast, acceptability judgments enable clearer inferences about structure. It is important to engage thoroughly with the logic behind different structural diagnostics.
| Momma, Shota: Parsing, generation, and grammar. University of Maryland, 2016. (Type: PhD Thesis) |
Humans use their grammatical knowledge in more than one way. On one hand, they use it to understand what others say. On the other hand, they use it to say what they want to convey to others (or to themselves). In either case, they need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of their language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same cognitive mechanisms. Currently, the field of psycholinguistics implicitly adopts the position that they are supported by different cognitive mechanisms, as evident from the fact that most psycholinguistic models seek to explain either comprehension or production phenomena. The potential existence of two independent cognitive systems underlying linguistic performance doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. This thesis thus aims to unify the structure building system in comprehension, i.e., the parser, and the structure building system in production, i.e., the generator, into one, so that the linking theory between knowledge and performance can also be unified into one. I will discuss and unify both existing and new data pertaining to how structures are assembled in understanding and speaking, and attempt to show that the unification of parsing and generation is at least a plausible research enterprise. In Chapter 1, I will discuss the previous and current views on how parsing and generation are related to each other. I will outline the challenges for the current view that the parser and the generator are the same cognitive mechanism. This single system view is discussed and evaluated in the rest of the chapters.
In Chapter 2, I will present new experimental evidence suggesting that the grain size of the pre-compiled structural units (henceforth simply structural units) is rather small, contrary to some models of sentence production. In particular, I will show that the internal structure of the verb phrase in a ditransitive sentence (e.g., The chef is donating the book to the monk) is not specified at the onset of speech, but is specified before the first internal argument (the book) needs to be uttered. I will also show that this timing of structural processes with respect to the verb phrase structure is earlier than the lexical processes of verb internal arguments. These two results in concert show that the size of structure building units in sentence production is rather small, contrary to some models of sentence production, yet structural processes still precede lexical processes. I argue that this view of generation resembles the widely accepted model of parsing that utilizes both top-down and bottom-up structure building procedures. In Chapter 3, I will present new experimental evidence suggesting that the structural representation strongly constrains the subsequent lexical processes. In particular, I will show that conceptually similar lexical items interfere with each other only when they share the same syntactic category in sentence production. A mechanism that I call syntactic gating will be proposed, and this mechanism characterizes how the structural and lexical processes interact in generation. I will present two Event Related Potential (ERP) experiments showing that lexical retrieval in (predictive) comprehension is also constrained by syntactic categories. I will argue that the syntactic gating mechanism is operative both in parsing and generation, and that the interaction between structural and lexical processes in both parsing and generation can be characterized in the same fashion.
In Chapter 4, I will present a series of experiments examining the timing at which verbs’ lexical representations are planned in sentence production. It will be shown that verbs are planned before the articulation of their internal arguments, regardless of the target language (Japanese or English) and regardless of the sentence type (active object-initial sentences in Japanese, passive sentences in English, and unaccusative sentences in English). I will discuss how this result sheds light on the notion of incrementality in generation. In Chapter 5, I will synthesize the experimental findings presented in this thesis and in previous research to address the challenges to the single system view I outlined in Chapter 1. I will then conclude by presenting a preliminary single system model that can potentially capture both the key sentence comprehension and sentence production data without assuming distinct mechanisms for each.
| Chacón, Dustin Alfonso: Comparative psychosyntax. University of Maryland, 2015. (Type: PhD Thesis) |
Every difference between languages is a “choice point” for the syntactician, psycholinguist, and language learner. The syntactician must describe the differences in representations that the grammars of different languages can assign. The psycholinguist must describe how the comprehension mechanisms search the space of the representations permitted by a grammar to quickly and effortlessly understand sentences in real time. The language learner must determine which representations are permitted in her grammar on the basis of her primary linguistic evidence. These investigations are largely pursued independently, and on the basis of qualitatively different data. In this dissertation, I show that these investigations can be pursued in a way that is mutually informative. Specifically, I show how learnability concerns and sentence processing data can constrain the space of possible analyses of language differences.
In Chapter 2, I argue that “indirect learning”, or abstract, cross-construction syntactic inference, is necessary in order to explain how the learner determines which complementizers can co-occur with subject gaps in her target grammar. I show that adult speakers largely converge in the robustness of the that-trace effect, a constraint on the co-occurrence of complementizers and subject gaps observed in languages like English, but unobserved in languages like Spanish or Italian. I show that realistic child-directed speech has very few long-distance subject extractions in English, Spanish, and Italian, implying that learners must be able to distinguish these different hypotheses on the basis of other data. This is more consistent with more conservative approaches to these phenomena (Rizzi, 1982), which do not rely on abstract complementizer agreement like later analyses (Rizzi, 2006; Rizzi & Shlonsky, 2007). In Chapter 3, I show that resumptive pronoun dependencies inside islands in English are constructed in a non-active fashion, which contrasts with recent findings in Hebrew (Keshev & Meltzer-Asscher, ms). I propose that an expedient explanation of these facts is to suppose that resumptive pronouns in English are ungrammatical repair devices (Sells, 1984), whereas resumptive pronouns in island contexts are grammatical in Hebrew. This implies that learners must infer which analysis is appropriate for their grammars on the basis of some evidence in the linguistic environment. However, a corpus study reveals that resumptive pronouns in islands are exceedingly rare in both languages, implying that this difference must be indirectly learned. I argue that theories of resumptive dependencies which analyze resumptive pronouns as instances of the same abstract construction (e.g., Hayon 1973; Chomsky 1977) license this indirect learning, as long as resumptive dependencies in English are treated as ungrammatical repair mechanisms.
In Chapter 4, I compare active dependency formation processes in Japanese and Bangla. These findings suggest that filler-gap dependencies are preferentially resolved at the first available position. In Japanese, this is the most deeply embedded clause, since embedded clauses always precede the embedding verb (Aoshima et al., 2004; Yoshida, 2006; Omaki et al., 2014). Bangla allows a within-language comparison of the relationship between active dependency formation processes and word order, since embedded clauses may precede or follow the embedding verb (Bayer, 1996). However, the results from three experiments in Bangla are mixed, suggesting a weaker preference for a linearly local resolution of filler-gap dependencies, unlike in Japanese. I propose a number of possible explanations for these facts, and discuss how differences in processing profiles may be accounted for in a variety of ways.
In Chapter 5, I conclude the dissertation.
| Lewis, Shevaun; Phillips, Colin: Aligning grammatical theories and language processing models. In: Journal of Psycholinguistic Research, vol. 44, no. 1, pp. 27-46, 2015. (Type: Journal Article) |
We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second, how should we relate grammatical theories and language processing models to each other?
| Phillips, Colin; Parker, Dan: The psycholinguistics of ellipsis. In: Lingua, vol. 151, pp. 78-95, 2014, (published online Nov 27, 2013). (Type: Journal Article) |
This article reviews studies that have used experimental methods from psycholinguistics to address questions about the representation of sentences involving ellipsis. Accounts of the structure of ellipsis can be classified based on three choice points in a decision tree. First: does the identity constraint between antecedents and ellipsis sites apply to syntactic or semantic representations? Second: does the ellipsis site contain a phonologically null copy of the structure of the antecedent, or does it contain a pronoun or pointer that lacks internal structure? Third: if there is unpronounced structure at the ellipsis site, does that structure participate in all syntactic processes, or does it behave as if it is genuinely absent at some levels of syntactic representation? Experimental studies on ellipsis have begun to address the first two of these questions, but they are unlikely to provide insights on the third question, since the theoretical contrasts do not clearly map onto timing predictions. Some of the findings that are emerging in studies on ellipsis resemble findings from earlier studies on other syntactic dependencies involving wh-movement or anaphora. Care should be taken to avoid drawing conclusions from experiments about ellipsis that are known to be unwarranted in experiments about these other dependencies.
| Phillips, Colin; Lewis, Shevaun: Derivational order in syntax: evidence and architectural consequences. In: Studies in Linguistics, vol. 6, pp. 11-47, 2013. (Type: Journal Article) |
Most formal syntactic theories propose either that syntactic representations are the product of a derivation that assembles words and phrases in a bottom-to-top and typically right-to-left fashion, or that they are not constructed in an ordered fashion. Both of these views contrast with the (roughly) left-to-right order of structure assembly in language use, and with some recent claims that syntactic derivations and real-time structure-building are essentially the same. In this article we discuss the mentalistic commitments of standard syntactic theories, distinguishing literalist, formalist, and extensionalist views of syntactic derivations. We argue that existing evidence favors the view that human grammatical representations are the product of an implementation-dependent system, i.e., syntactic representations are assembled in a consistent order, as claimed by grammatical models that are closely aligned with real-time processes. We discuss the evidence for left-to-right syntactic derivations, and respond to critiques of a proposal that the conflicts between the results of constituency diagnostics can be explained in terms of timing.
| Phillips, Colin: Some arguments and non-arguments for reductionist accounts of syntactic phenomena. In: Language and Cognitive Processes, vol. 28, pp. 156-187, 2013. (Type: Journal Article) |
Many syntactic phenomena have received competing accounts, either in terms of formal grammatical mechanisms, or in terms of independently motivated properties of language processing mechanisms (‘‘reductionist’’ accounts). A variety of different types of argument have been put forward in efforts to distinguish these competing accounts. This article critically examines a number of arguments that have been offered as evidence in favour of formal or reductionist analyses, and concludes that some types of argument are more decisive than others. It argues that evidence from graded acceptability effects and from isomorphism between acceptability judgements and on-line comprehension profiles are less decisive. In contrast, clearer conclusions can be drawn from cases of overgeneration, where there is a discrepancy between acceptability judgements and the representations that are briefly constructed on-line, and from tests involving individual differences in cognitive capacity. Based on these arguments, the article concludes that a formal grammatical account is better supported in some domains, and that a reductionist account fares better in other domains. Phenomena discussed include island constraints, agreement attraction, constraints on anaphora, and comparatives.
| Phillips, Colin: On the nature of island constraints. II: Language learning and innateness. In: Sprouse, Jon; Hornstein, Norbert (Ed.): Experimental syntax and island effects, pp. 132-157, Cambridge University Press, 2013. (Type: Incollection) |
| Phillips, Colin: On the nature of island constraints. I: Language processing and reductionist accounts. In: Sprouse, Jon; Hornstein, Norbert (Ed.): Experimental syntax and island effects, pp. 64-108, Cambridge University Press, 2013. (Type: Incollection) |
| Phillips, Colin: Parser-grammar relations: We don’t understand everything twice. In: Sanz, Montserrat; Laka, Itziar; Tanenhaus, Michael (Ed.): Language down the garden path: the cognitive basis for linguistic structure, pp. 294–315, Oxford University Press, 2013. (Type: Incollection) |
| Sprouse, Jon; Wagers, Matt; Phillips, Colin: Deriving competing predictions from grammatical approaches and reductionist approaches to island effects. In: Sprouse, Jon; Hornstein, Norbert (Ed.): Experimental syntax and island effects, pp. 21–41, Cambridge University Press, 2013. (Type: Incollection) |
| Sprouse, Jon; Wagers, Matt; Phillips, Colin: Working memory capacity and island effects: A reminder of the issues and of the facts. In: Language, vol. 88, pp. 401-407, 2012. (Type: Journal Article) |
This article responds to a critique by Hofmeister et al. (2012).
| Sprouse, Jon; Wagers, Matt; Phillips, Colin: A test of the relation between working memory capacity and syntactic island effects. In: Language, vol. 88, pp. 82-123, 2012. (Type: Journal Article) |
The source of syntactic island effects has been a topic of considerable debate within linguistics and psycholinguistics. Explanations fall into three basic categories: grammatical theories, which posit specific grammatical constraints that exclude extraction from islands; grounded theories, which posit grammaticized constraints that have arisen to adapt to constraints on learning or parsing; and reductionist theories, which analyze island effects as emergent consequences of non-grammatical constraints on the sentence parser, such as limited processing resources. In this article we present two studies designed to test a fundamental prediction of one of the most prominent reductionist theories: that the strength of island effects should vary across speakers as a function of individual differences in processing resources. We tested over three hundred native speakers of English on four different island-effect types (whether, complex NP, subject, and adjunct islands) using two different acceptability rating tasks (seven-point scale and magnitude estimation) and two different measures of working-memory capacity (serial recall and n-back). We find no evidence of a relationship between working-memory capacity and island effects using a variety of statistical analysis techniques, including resampling simulations. These results suggest that island effects are more likely to be due to grammatical constraints or grounded grammaticized constraints than to limited processing resources.
| Alcocer, Pedro; Phillips, Colin: Using relational syntactic constraints in content-addressable memory architectures for sentence parsing. 2012. (Type: Unpublished) |
How linguistic representations in memory are searched and retrieved during sentence processing is an active question in psycholinguistics. Much work has independently suggested that retrieval operations require constant-time computations, are susceptible to interference, and operate under the constraint of a severely limited focus of attention. These features suggest a model for sentence processing that uses a content-addressable memory (CAM) architecture that accesses items in parallel while relying on only a limited amount of privileged, fast storage. A challenge for a CAM architecture comes from the potentially unbounded configurational relation c-command (equivalent to logical scope) that plays a pervasive role in linguistic phenomena. CAM is well-suited to retrieval of elements based on inherent properties of memory chunks, but relational notions such as c-command involve the properties of pairs of nodes in a structure, rather than inherent properties of individual chunks. To explore this problem, in this paper we adopt an explicit CAM-based model of sentence processing in the context of the ACT-R computational architecture (Lewis & Vasishth, 2005, Cognitive Science, 29, 375-419). We discuss why c-command is a challenge for CAM and explore algorithms for exploiting or approximating c-command online, and discuss their consequences for the model. We identify computational problems that any attention-limited, CAM-based model would have to address.
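Why c-command resists feature-based retrieval can be made concrete with a short sketch (illustrative Python only, not the ACT-R model from the paper; all class and function names here are my own). The point is that deciding whether one node c-commands another requires walking the tree configuration relating a *pair* of nodes, whereas CAM retrieval can only match features stored on an individual chunk.

```python
class Node:
    """A syntactic tree node; parent pointers are set on construction."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.parent = None
        for child in self.children:
            child.parent = self

def dominates(a, b):
    """True if node a properly dominates node b (a is an ancestor of b)."""
    n = b.parent
    while n is not None:
        if n is a:
            return True
        n = n.parent
    return False

def c_commands(x, y):
    """Simplified c-command: x c-commands y iff neither dominates the
    other and x's parent dominates y. The check is irreducibly
    relational: it inspects the path between two nodes, so it cannot be
    restated as a feature match on either node in isolation."""
    if x is y or dominates(x, y) or dominates(y, x):
        return False
    return x.parent is not None and dominates(x.parent, y)

# Toy structure: [S [NP] [VP [V] [NP2]]]
np2 = Node("NP2")
v = Node("V")
vp = Node("VP", [v, np2])
np = Node("NP")
s = Node("S", [np, vp])

print(c_commands(np, np2))   # subject c-commands the object: True
print(c_commands(np2, np))   # but not vice versa: False
```

The tree-walking loop in `dominates` is exactly what a content-addressable lookup lacks, which is why the paper explores algorithms for exploiting or approximating c-command online rather than assuming it comes for free.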
| Phillips, Colin: Should we impeach armchair linguists? In: Japanese/Korean Linguistics, vol. 17, pp. 49–64, 2010. (Type: Journal Article) |
| Goro, Takuya: Language specific constraints on scope interpretation in first language acquisition. University of Maryland, 2007. (Type: PhD Thesis) |
This dissertation investigates the acquisition of language-specific constraints on scope interpretation by Japanese preschool children. Several constructions in Japanese do not allow scope interpretations that the corresponding English sentences do allow. First, in Japanese transitive sentences with multiple quantificational arguments, an inverse scope interpretation is disallowed, due to the Rigid Scope Constraint. Second, Japanese logical connectives cannot be interpreted under the scope of local negation, due to their Positive Polarity. Third, in Japanese infinitival complement constructions with implicative matrix verbs like wasureru (“forget”) the inverse scope interpretation is required, due to the Anti-Reconstruction Constraint. The main goal of this research is to determine how Japanese children learn these constraints on scope interpretations. To that end, three properties of the acquisition task that have an influence on the learnability of linguistic knowledge are examined: productivity, no negative evidence, and arbitrariness. The results of experimental investigations show that Japanese children productively generate scope interpretations that are never exemplified in the input. For example, with sentences that contain two quantificational arguments, Japanese children accessed inverse scope interpretations that Japanese adults do not allow. Also, Japanese children interpret the disjunction ka under the scope of local negation, which is not a possible interpretive option in the adult language. These findings clearly show that children do not acquire these scope constraints through conservative learning, and raise the question of how they learn to purge their non-adult interpretations. It is argued that input data do not provide learners with negative evidence (direct or indirect) against particular scope interpretations.
Two inherent properties of input data about possible scope interpretations, data sparseness and indirectness, make negative evidence too unreliable as a basis for discovering which scope interpretations are impossible. In order to solve the learnability problems that children’s scope productivity raises, I suggest that the impossibility of their non-adult interpretations is acquired by learning some independently observable properties of the language. In other words, the scope constraints are not arbitrary, in the sense that their effects are consequences of other properties of the grammar of Japanese.
Phillips, Colin; Wagers, Matthew: Relating structure and time in linguistics and psycholinguistics. In: Oxford handbook of psycholinguistics, pp. 739–756, Oxford University Press, 2007. (Type: Incollection)
Yoshida, Masaya: Constraints and mechanisms in long-distance dependency formation. University of Maryland, 2006. (Type: PhD Thesis)
This thesis aims to reveal the mechanisms and constraints involved in long-distance dependency formation, both in the static knowledge of language and in real-time sentence processing. Special attention is paid to the grammar and processing of island constraints. Several experiments show that in a head-final language like Japanese, global constraints such as island constraints are applied long before decisive information, such as verb heads and relative heads, is encountered. Based on this observation, the thesis argues that a powerful predictive mechanism is at work behind real-time sentence processing, and a model of this predictive mechanism is proposed. The thesis examines the nature of several island constraints, specifically Complex NP Islands induced by relative clauses, and clausal adjunct islands. It is argued that in the majority of languages both relative clauses and adjunct clauses are islands, but that there is a small subset of languages (including Japanese, Korean and Malayalam) where extraction out of adjunct clauses seems to be allowed. Applying well-established syntactic tests to the relevant constructions in Japanese, it is established that dependencies crossing adjunct clauses are indeed created by movement operations, and yet extraction from adjuncts is allowed. Building on these findings, the thesis turns to the interaction between real-time sentence processing and island constraints. Looking specifically at Japanese, a head-final language, the thesis asks how sentence structures are built and what constraints apply to the structure-building process. A series of experiments shows that in Japanese, even before high-information-bearing units such as verbs, relative heads or adjunct markers are encountered, a structural skeleton is built, and such pre-computed structures are highly articulated. It is shown that structural constraints on long-distance dependencies are imposed on the pre-built structure, a finding that further supports the incrementality of sentence processing.
Kazanina, Nina: The acquisition and processing of backwards anaphora. University of Maryland, 2005. (Type: PhD Thesis)
This dissertation investigates long-distance backwards pronominal dependencies (backwards anaphora or cataphora) and constraints on such dependencies from the viewpoint of language development and real-time language processing. Based on the findings from a comprehension experiment with Russian-speaking children and on real-time sentence processing data from English and Russian adults, I argue for a position that distinguishes structural and non-structural constraints on backwards anaphora. I show that unlike their non-syntactic counterparts, structural constraints on coreference, in particular Principle C of the Binding Theory (Chomsky 1981), are active at the earliest stage of language development and of real-time processing. In language acquisition, the results of a truth-value judgment task with 3–6-year-old Russian-speaking children reveal a striking developmental asymmetry between Principle C, a cross-linguistically consistent syntactic constraint on coreference, and a Russian-specific discourse constraint on coreference. Whereas Principle C is respected by children already at the age of three, the Russian-specific (discourse) constraint is not operative in child language until the age of five. These findings present a challenge for input-driven accounts of language acquisition and are most naturally explained in theories that admit the existence of innately specified principles that underlie linguistic representations. In real-time processing, the findings from a series of self-paced reading experiments on English and Russian show that in backwards anaphora contexts the parser initiates an active search for an antecedent for the pronoun which is limited to positions that are not subject to structural constraints on coreference, e.g. Principle C.
This grammatically constrained active search mechanism observed in the processing of backwards anaphora is similar to the mechanism found in the processing of another type of long-distance dependency, the wh-dependency. I suggest that the early application of structural constraints on long-distance dependencies is due to reasons of parsing efficiency rather than to their architectural priority, since such constraints help restrict the search space of possible representations to be built by the parser. A computational parsing algorithm is developed that combines the constrained active search mechanism with a strictly incremental left-to-right structure-building procedure.
Phillips, Colin: Linguistics and linking problems. In: Rice, Mabel; Warren, Steven (Ed.): Developmental Language Disorders: From Phenotypes to Etiologies, pp. 241–287, Lawrence Erlbaum Associates, Mahwah, NJ, 2004. (Type: Incollection)
Phillips, Colin; Lau, Ellen: Foundational issues. In: Journal of Linguistics, vol. 40, no. 3, pp. 571–591, 2004. (Type: Journal Article)
Phillips, Colin: Linear order and constituency. In: Linguistic Inquiry, vol. 34, no. 1, pp. 37–90, 2003. (Type: Journal Article)
This article presents a series of arguments that syntactic structures are built incrementally, in a strict left-to-right order. By assuming incremental structure building it becomes possible to explain the differences in the range of constituents available to different diagnostics of constituency, including movement, ellipsis, coordination, scope, and binding. In an incremental derivation, structure building creates new constituents, and in doing so it may destroy existing constituents. The article presents detailed evidence for the prediction of incremental grammar that a syntactic process may refer only to those constituents that are present at the point in the derivation when the process applies.
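The core dynamic described in this abstract, left-to-right merges creating constituents that later merges destroy, can be made concrete with a small sketch. The Python below is a toy illustration under invented simplifying assumptions (binary trees, uniform attachment at the right edge), not the article's actual derivation system; the names `Node` and `extend` are hypothetical.

```python
# Toy sketch of strictly left-to-right structure building: each new word
# merges at the right edge, which can destroy a previously built constituent.

class Node:
    def __init__(self, left, right=None):
        self.left, self.right = left, right   # terminal if right is None

    def words(self):
        if self.right is None:
            return [self.left]
        return self.left.words() + self.right.words()

    def constituents(self):
        """All word sequences that form constituents of this tree."""
        out = [tuple(self.words())]
        if self.right is not None:
            out += self.left.constituents() + self.right.constituents()
        return out

def extend(tree, word):
    """Merge the incoming word with the rightmost terminal, restructuring
    the right edge of the tree (a simplified stand-in for Merge Right)."""
    if tree.right is None:
        return Node(tree, Node(word))
    return Node(tree.left, extend(tree.right, word))

# Word-by-word derivation of "Wallace saw Gromit":
t = Node("Wallace")
t = extend(t, "saw")        # at this step [Wallace saw] is a constituent
assert ("Wallace", "saw") in t.constituents()
t = extend(t, "Gromit")     # merging the object destroys [Wallace saw]
assert ("Wallace", "saw") not in t.constituents()
assert ("saw", "Gromit") in t.constituents()
```

At the intermediate step the string [Wallace saw] is a constituent and would be visible to any diagnostic applying at that point; once the object is merged, only [saw Gromit] remains, mirroring the pattern of apparently conflicting constituency diagnostics that the article explains.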
Phillips, Colin; Lasnik, Howard: Linguistics and empirical evidence: Reply to Edelman and Christiansen. In: Trends in Cognitive Sciences, vol. 7, no. 2, pp. 61–62, 2003. (Type: Journal Article)
The main claims of Edelman and Christiansen’s (E&C’s) comment on Lasnik’s article are that generative grammar is built upon empirically weak (perhaps non-existent) foundations, and that generative grammarians aggressively resist experimental testing of their assumptions. Neither of these claims survives even brief scrutiny.
Phillips, Colin: Syntax. In: Nadel, Lynn (Ed.): Encyclopedia of Cognitive Science, vol. 4, pp. 319–329, Macmillan Reference, 2003. (Type: Book Chapter)
Phillips, Colin: Constituency in deletion and movement. 2002 (remarks on Lechner 2001). (Type: Unpublished)
Kim, Meesook: A cross-linguistic perspective on the acquisition of locative verbs. University of Delaware, 1999. (Type: PhD Thesis)
The main goal of this thesis is to determine what kind of learning mechanism allows children to reach adult-like knowledge of their native language, focusing specifically on a cross-linguistic comparison of the syntax and semantics of locative verbs. As a possible learning mechanism, innate and universal “linking rules” have been suggested to solve the learnability problem of how children can learn which verbs allow syntactic alternation and which verbs do not. If consistent mappings between syntax and semantics are universal, and if children can take advantage of them, then learning the meanings of verbs and their syntactic possibilities could be made easier. However, the existence of cross-linguistic variation in syntax-semantics mappings may pose a challenge for learning theories based on universal mappings. In Chapters 2 and 3, I characterize to what extent there are universal syntax-semantics correspondences, and to what extent there are language-specific syntax-semantics correspondences across languages, in terms of the syntax of locative verbs. A cross-linguistic survey of locative verb syntax in 13 languages shows that across languages, some syntax-semantics correspondences appear to be universal, some correspondences appear to apply only within one of two broad language groups, and some correspondences appear to involve idiosyncratic language-by-language variation. More specifically, I show that cross-linguistic variation in the syntax of locative verbs is quite restricted, dividing languages into two basic classes. Korean-type languages have a very simple pattern for locative verbs. All locative verbs allow Figure frames and there are no Non-alternating Ground verbs in these languages. In English-type languages, basic change-of-state verbs always allow Ground frames. Furthermore, I show that certain aspects of locative verb syntax correlate with an independent morphological property, namely the availability of V-V Compounding or verb serialization.
This simple morphological cue may help children to figure out the properties of locative verbs in their target language. In Chapter 4, I show how much children have learned about the syntax of locative verbs by age 3-4, and how consistent syntax-semantics correspondences can help children learn the syntax of locative verbs, given the potential problems raised by cross-linguistic variation. This is shown through an elicited production task with child and adult speakers of both English and Korean. In addition to learning mechanisms based on linking rules, I examine another potential learning mechanism based on distributional properties of the input, in order to find out what information is available in the input to learners. Finally, I show to what extent learning strategies based on universal mappings can and cannot help children to succeed in learning the syntax of locative verbs, and to what extent learning strategies based on the use of distributional properties of the input can assist children to reach adult-like knowledge of their target language.
Schneider, David: Parsing and incrementality. University of Delaware, 1999. (Type: PhD Thesis)
There is a great deal of evidence that language comprehension occurs very rapidly. To account for this, it is widely, but not universally, assumed in the psycholinguistic literature that every word of a sentence is integrated into a syntactic representation of the sentence as soon as the word is encountered. This means that it is not possible to wait for subsequent words to provide information to guide a word's initial attachment into syntactic structure. In this dissertation I show how syntactic structures can be built on a word-by-word incremental basis. This work is motivated by the desire to explain how structure can be built incrementally. A psycholinguistically plausible theory of parsing should generalize to all languages. In this work I show not only how head-initial languages like English can be parsed incrementally, but also how head-final languages like Japanese and German can be parsed incrementally. One aspect of incremental parsing that is particularly troublesome in head-final languages is that it is not always clear how a constituent should be structured in advance of the phrase-final heads. There is a significant amount of temporary ambiguity in head-final languages related to the fact that heads of constituents are not available until the end of the phrase. In this work I show that underspecification of the features of a head allows for incremental structuring of the input in head-final structures, while still retaining the temporary ambiguity that is so common in these languages. The featural underspecification allowed by this system is extended to categorial features; I do not assume that every head must always be specified for its category. I assume that the incremental parser builds structures in accord with the principles of the grammar. In other words, there should be no need to submit a structure built by the parser to a separate grammar module to determine whether or not the sentence obeys the grammar.
As one aspect of this, I show how wh-movement phenomena can be accommodated within the theory. As part of the treatment of wh-movement, constraints on wh-movement are incorporated into the system, thereby allowing the difference between grammatical and ungrammatical wh-movement to be captured in the parse tree. In addition to being incremental and cross-linguistically generalizable, a parsing theory should account for the rest of human parsing behavior. I show that a number of structurally motivated parsing heuristics can be accommodated within the general parsing theory presented here. As part of the investigation of the incremental parser, experimental evidence is presented that establishes a preference for structure-preserving operations in the face of temporary ambiguity. In particular, the experiments show that once a commitment has been made to a particular analysis of a verbal argument, there is a preference to avoid reanalyzing the argument. This preference holds even though the reanalysis is not particularly difficult, and the analysis that is adopted in preference to the reanalysis disobeys a general parsing preference for attachments to recent material. Thus, it appears that existing structural assumptions are rejected only as a last resort. Finally, to demonstrate that the theory is explicit enough to make specific predictions, I implement portions of the theory as a computer program.
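The underspecification idea in this abstract can be illustrated with a minimal sketch. The Python below is a hypothetical simplification, not Schneider's actual parser or implementation: arguments are attached incrementally under a projected phrase whose head and category features remain unspecified until the phrase-final head is encountered.

```python
# Toy sketch of featural underspecification in head-final parsing
# (invented names; a simplification for illustration only).

class Projection:
    """A phrase built incrementally, before its head is known."""
    def __init__(self):
        self.args = []          # arguments attached word by word
        self.head = None
        self.category = None    # categorial features left underspecified

    def attach(self, arg):
        # Incremental attachment: structure exists before the head arrives.
        self.args.append(arg)

    def set_head(self, head, category):
        # The phrase-final head resolves the underspecified features.
        self.head, self.category = head, category

# "John-ga hon-o yonda" ("John read a book"), parsed word by word:
p = Projection()
p.attach("John-ga")             # nominative argument attached
p.attach("hon-o")               # accusative argument attached
assert p.category is None       # no premature commitment to a category
p.set_head("yonda", "V")        # the final verb fixes head and category
assert p.category == "V" and p.args == ["John-ga", "hon-o"]
```

The point of the sketch is that both arguments are already structured together before the verb appears, yet nothing about the eventual head has been guessed, which is how the temporary ambiguity characteristic of head-final languages is preserved.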
Phillips, Colin: Disagreement between adults and children. In: Mendikoetxea, Amaya; Uribe-Etxebarria, Myriam (Ed.): Theoretical Issues on the Morphology-Syntax Interface, pp. 359–394, ASJU, San Sebastian, 1998. (Type: Book Chapter)
Phillips, Colin: Teaching syntax with Trees. In: Glot International, vol. 3, no. 7, 1998. (Type: Journal Article)
Phillips, Colin: Merge right: An approach to constituency conflicts. In: WCCFL XV: Proceedings of the 15th West Coast Conference on Formal Linguistics, pp. 381–395, CSLI Publications, Stanford, CA, 1997. (Type: Inproceedings)
Phillips, Colin: Order and structure. Massachusetts Institute of Technology, 1996. (Type: PhD Thesis)
The aim of this thesis is to argue for the following two main points. First, that grammars of natural language construct sentences in a strictly left-to-right fashion, i.e. starting at the beginning of the sentence and ending at the end. Second, that there is no distinction between the grammar and the parser. In the area of phrase structure, I show that the left-to-right derivations forced by the principle Merge Right can account for the apparent contradictions that different tests of constituency show, and that they also provide an explanation for why the different tests yield the results that they do. Phenomena discussed include coordination, movement, ellipsis, binding, right node raising and scope. I present a preliminary account of the interface of phonology and morphology with syntax based on left-to-right derivations. I show that this approach to morphosyntax allows for a uniform account of locality in head movement and clitic placement, explains certain directional asymmetries in phonology-syntax mismatches and head movement, and allows for a tighter connection between syntactic and phonological phrases than commonly assumed. In parsing I argue that a wide range of structural biases in ambiguity resolution can be accounted for by the single principle Branch Right, which favors building right-branching structures wherever possible. Evidence from novel and existing experimental work is presented which shows that Branch Right has broader empirical coverage than other proposed structural parsing principles. Moreover, Branch Right is not a parsing-specific principle: it is independently motivated as an economy principle of syntax in the chapters on syntax. The combination of these results from syntax and parsing makes it possible to claim that the parser and the grammar are identical. 
The possibility that the parser and the grammar are identical or extremely similar was explored in the early 1960s, but is widely considered to have been discredited by the end of that decade. I show that arguments against this model which were once valid no longer apply given left-to-right syntax and the view of the parser proposed here.
Phillips, Colin: Ergative subjects. In: Gerdts, Donna; Dziwirek, Katarzyna; Burgess, Clifford (Ed.): Grammatical Relations, CSLI Publications, Stanford, CA, 1996. (Type: Incollection)
Phillips, Colin: Right association in parsing and grammar. In: Schütze, Carson; Ganger, Jennifer; Broihier, Keven (Ed.): Papers on Language Acquisition and Proce, MIT Working Papers in Linguistics, vol. 26, pp. 37–93, Department of Linguistics and Philosophy, Massachusetts Institute of Technology, 1995. (Type: Book Chapter)
Phillips, Colin; Harley, Heidi (Ed.): The Morphology-syntax Connection: Proceedings of the January 1994 MIT Workshop. 1994. (Type: Collection)
Phillips, Colin: Verbal case and the nature of polysynthetic inflection. In: Proceedings of CONSOLE 2, 1994. (Type: Inproceedings)
This paper tries to resolve a conflict in the literature on the connection between ‘rich’ agreement and argument-drop. Jelinek (1984) claims that inflectional affixes in polysynthetic languages are theta-role bearing arguments; Baker (1991) argues that such affixes are agreement, bearing Case but no theta-role. Evidence from Yimas shows that both of these views can be correct, within a single language. Explanation of what kind of inflection is used where also provides us with an account of the unusual split ergative agreement system of Yimas, and suggests a novel explanation for the ban on subject incorporation, and some exceptions to the ban.
Phillips, Colin: Conditions on agreement in Yimas. In: Bobaljik, Jonathan; Phillips, Colin (Ed.): Papers on Case and Agreement I, MIT Working Papers in Linguistics, vol. 18, pp. 173–213, 1993. (Type: Book Chapter)
Bobaljik, Jonathan; Phillips, Colin (Ed.): Papers on Case and Agreement I. MIT Working Papers in Linguistics, vol. 18, 1993. (Type: Book)
Phillips, Colin (Ed.): Papers on Case and Agreement II. MIT Working Papers in Linguistics, vol. 19, 1993. (Type: Book)