## Syntax as phonology: Syntactic constraints as string constraints

🕑 12 min • 👤 Thomas Graf • 📆 September 29, 2019 in Tutorials • 🏷 subregular, syntax, locality, c-command, constraints, islands

The previous post in this series discussed the lay of the (is)land from the perspective of TSL (I’m so disappointed in myself for not making this pun last time; better late than never). I mentioned that the TSL view cannot handle all island constraints. Sometimes, we need an alternative approach. But this alternative approach doesn’t come out of nowhere. It is also what we need for all kinds of licensing constraints, and it also handles restrictions on movement that are not island constraints.

## Islands are unexpectedly expected

🕑 6 min • 👤 Thomas Graf • 📆 September 23, 2019 in Tutorials • 🏷 subregular, syntax, locality, Move, islands

In the previous post we saw that Merge is SL-2 over dependency trees, and Move is TSL-2. For every movement feature f, we project a separate tier that contains only lexical items that have a licensor feature f+ or a licensee feature f-. A tier is well-formed iff every lexical item with a licensee feature has a mother with a licensor feature, and every lexical item with a licensor feature has exactly one lexical item among its daughters that carries a licensee feature. It’s a pretty simple system. Despite that simplicity, it predicts a fundamental aspect of movement: island effects!
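The tier construction above can be sketched in a few lines of Python. This is a minimal illustration under assumed representations, not the formal definitions from the post: nodes of a dependency tree are hypothetical objects carrying at most one feature, either the licensor "f+" or the licensee "f-".

```python
class Node:
    """A hypothetical lexical item in a dependency tree."""
    def __init__(self, feature=None, daughters=None):
        self.feature = feature          # "f+", "f-", or None
        self.daughters = daughters or []

def project_tier(node):
    """Keep only nodes carrying f+ or f-; a skipped node's
    tier daughters are promoted to the nearest kept ancestor."""
    tier_daughters = []
    for d in node.daughters:
        sub = project_tier(d)
        if d.feature in ("f+", "f-"):
            tier_daughters.append(Node(d.feature, sub))
        else:
            tier_daughters.extend(sub)
    return tier_daughters

def tier_well_formed(tier_nodes, mother_feature=None):
    """Every f- node's tier mother must carry f+, and every f+ node
    must have exactly one f- daughter on the tier."""
    for n in tier_nodes:
        if n.feature == "f-" and mother_feature != "f+":
            return False
        if n.feature == "f+":
            if sum(d.feature == "f-" for d in n.daughters) != 1:
                return False
        if not tier_well_formed(n.daughters, n.feature):
            return False
    return True

# A licensor with exactly one licensee below it passes:
good = Node(None, [Node("f+", [Node(None, [Node("f-")])])])
# A licensee with no licensor above it on the tier fails:
bad = Node(None, [Node(None, [Node("f-")])])
```

Running `tier_well_formed(project_tier(good))` yields `True`, while the `bad` tree yields `False`: on its tier, the f- item has no f+ mother.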

## The subregular complexity of Merge and Move

🕑 9 min • 👤 Thomas Graf • 📆 September 18, 2019 in Tutorials • 🏷 subregular, syntax, locality, strictly local, tier-based strictly local, Minimalist grammars, Merge, Move

Alright, syntax. Things are gonna get a bit more… convoluted? Nah, interesting! In principle we’ll see a lot of the same things as in phonology, and that’s kind of the point: phonology and syntax are actually very similar. But syntax isn’t quite as exhibitionist as phonology; it doesn’t show off its subregular complexity in the open for the world to see. So the first thing we’ll need is a suitable representation. Once we have that, it’s pretty much phonology all over again, but now with trees.

## Some musings on corpora

🕑 9 min • 👤 Thomas Graf • 📆 September 05, 2019 in Discussions • 🏷 syntax, corpus linguistics, Minimalist grammars, Combinatory categorial grammar

Pro tip: Don’t start a multi-part series of posts on locality right before the beginning of the semester, when you have a pile of papers to review. On the upside, this will give you guys some extra time to digest all the concepts in the three previous posts. In the meantime, here’s a quick and dirty post on corpus linguistics and why it should be part of our syntax curriculum. I didn’t even proofread it, so beware.

## KISSing syntax

🕑 7 min • 👤 Thomas Graf • 📆 July 12, 2019 in Discussions • 🏷 methodology, syntax

Here’s a question I first heard from Hans-Martin Gärtner many years ago. I don’t remember the exact date, but I believe it was in 2009 or 2010. We both happened to be in Berlin, chowing down on some uniquely awful sandwiches. Culinary cruelties notwithstanding, the conversation was very enjoyable, and we quickly got to talking about linguistics as a science, at which point Hans-Martin offered the following observation (not verbatim):

> It’s strange how linguistic theories completely lack modularity. In other sciences, each phenomenon gets its own theory, and the challenge lies in unifying them.

Back then I didn’t share his sentiment. After all, phonology, morphology, and syntax each have their own theory, and eventually we might try to unify them (an issue that’s very dear to me). But the remark stuck with me, and the more I’ve thought about it in the last few years the more I have to side with Hans-Martin.

## More observations on privative features

🕑 7 min • 👤 Thomas Graf • 📆 June 17, 2019 in Discussions • 🏷 features, privativity, phonology, syntax, transductions

In an earlier post I looked at privativity in the domain of feature sets: given a collection of features, what conditions must be met by their extensions in order for these features to qualify as privative. But that post concluded with the observation that looking at the features in isolation might be a case of barking up the wrong tree. Features are rarely of interest on their own; what matters is how they interact with the rest of the grammatical machinery. This is the step from a feature set to a feature system. Naively, one might expect that a privative feature set gives rise to a privative feature system. But that’s not at all the case. The reason for that is easy to explain yet difficult to fix.

## Some observations on privative features

🕑 9 min • 👤 Thomas Graf • 📆 June 11, 2019 in Discussions • 🏷 features, privativity, phonology, syntax

One topic that came up at the feature workshop is whether features are privative or binary (aka equipollent). Among mathematical linguists it’s part of the general folklore that there is no meaningful distinction between the two. Translating from a privative feature specification to a binary one is trivial. If we have three features $$f$$, $$g$$, and $$h$$, then the privative bundle $$\{f, g\}$$ is equivalent to $$[+f, +g, -h]$$. In the other direction, we can make binary features privative by simply interpreting the $$+$$/$$-$$ as part of the feature name. That is to say, $$-f$$ isn’t a feature $$f$$ with value $$-$$, it’s simply the privative feature $$\text{minus-}f$$. Some arguments add a bit of sophistication to this, e.g. the Boolean algebra perspective in Keenan & Moss’s textbook Mathematical Structures in Language. So far so good, yet so unsatisfactory.
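The folklore translation is mechanical enough to fit in a few lines. Here is a quick sketch for the toy inventory {f, g, h}; the function names and the `plus_`/`minus_` naming convention are my own illustrative choices, not anything from the literature.

```python
# Toy feature inventory from the running example.
INVENTORY = {"f", "g", "h"}

def privative_to_binary(bundle):
    """The privative bundle {f, g} becomes [+f, +g, -h]:
    features in the bundle get +, all others get -."""
    return {feat: ("+" if feat in bundle else "-") for feat in INVENTORY}

def binary_to_privative(spec):
    """Fold the value into the feature name: -f is not a feature f
    with value -, it is the privative feature minus-f."""
    return {("plus_" if val == "+" else "minus_") + feat
            for feat, val in spec.items()}

binary = privative_to_binary({"f", "g"})
# binary == {"f": "+", "g": "+", "h": "-"}
privative = binary_to_privative(binary)
# privative == {"plus_f", "plus_g", "minus_h"}
```

Note that the round trip changes the alphabet of feature names, which is exactly why the translation is trivial at the level of feature sets yet says nothing about feature systems.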

## Underappreciated arguments: The inverted T-model

🕑 9 min • 👤 Thomas Graf • 📆 May 15, 2019 in Discussions • 🏷 syntax, transductions, bimorphisms, T-model

There are many conceptual pillars of linguistics that are, for one reason or another, considered contentious outside the field. This includes the competence/performance split, the grammar/parser dichotomy, underlying representations, and the inverted T-model. These topics have been discussed to death, but they keep coming up. Since it’s tiring to hear the same arguments over and over again, I figure it’d be interesting to discuss some little-known ones that are rooted in computational linguistics. This will be an ongoing series, and its inaugural entry is on the connection between the T-model and bimorphisms.