MGs do not struggle (as much as you think) with multiple wh-movement

🕑 15 min • 👤 Thomas Graf • 📆 July 23, 2021 in Discussions • 🏷 Minimalist grammars, movement, multiple wh-movement, transductions, first-order logic

In February I had a nice chat with Bob Frank and Tim Hunter regarding their SCiL paper on comparing tree-construction methods across mildly context-sensitive formalisms. Among other things, this paper reiterates the received view that MGs cannot handle unbounded multiple wh-movement. That is certainly true for standard MGs as defined in Stabler (1997), but my argument was that this is due to what may now be considered an idiosyncrasy of the definition. We can relax that definition to allow for multiple wh-movement while preserving essential formal properties of MGs. However, friendly chats aren’t a good format for explaining this in detail, so I promised them an Outdex post with some math. Well, 5 months later, I finally make good on my promise.


Continue reading

Logical transductions: Bats, butterflies, and the paradox of an almighty God

🕑 14 min • 👤 Thomas Graf • 📆 September 21, 2020 in Tutorials • 🏷 formal language theory, transductions, subregular, first-order logic

Since we recently had a post about Engelfriet’s work on transductions and logic, I figured I’d add a short tutorial that combines the two and talks a bit about logical transductions. I won’t touch on concrete linguistic issues in this post, but I will briefly dive into some implications for how MGs push PF and LF directly into “syntax” (deliberate scare quotes). I also have an upcoming post on representations and features that is directly informed by the logical transduction framework. So if you usually skip anything here that doesn’t engage directly with linguistics, you might still want to make an exception this time, even if today’s post is mostly logic and formulas.


Continue reading

More observations on privative features

🕑 7 min • 👤 Thomas Graf • 📆 June 17, 2019 in Discussions • 🏷 features, privativity, phonology, syntax, transductions

In an earlier post I looked at privativity in the domain of feature sets: given a collection of features, what conditions must be met by their extensions in order for these features to qualify as privative. But that post concluded with the observation that looking at the features in isolation might be a case of barking up the wrong tree. Features are rarely of interest on their own; what matters is how they interact with the rest of the grammatical machinery. This is the step from a feature set to a feature system. Naively, one might expect that a privative feature set gives rise to a privative feature system. But that’s not at all the case. The reason for that is easy to explain yet difficult to fix.
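One way to see the gap in miniature (with a purely hypothetical feature set, and keeping in mind that the post’s argument is more general than this little sketch): a privative feature set only provides the natural classes picked out by shared feature bundles, but as soon as the constraint language can test for the absence of a feature, it can refer to exactly the classes a binary [-voice] would.

```python
# Hypothetical privative feature set: each segment is just the set of
# features it carries; there is no [-voice], only the absence of "voice".
SEGMENTS = {
    "b": {"voice", "labial"},
    "p": {"labial"},
    "d": {"voice", "coronal"},
    "t": {"coronal"},
}

def natural_class(features):
    """The classes a privative feature SET provides: all segments that
    carry every feature in the given bundle."""
    return {seg for seg, fs in SEGMENTS.items() if features <= fs}

def lacks(feature):
    """What a constraint language with negation can additionally refer to:
    all segments that do NOT carry a feature. This behaves just like a
    binary [-voice], so the feature SYSTEM is no longer privative."""
    return {seg for seg, fs in SEGMENTS.items() if feature not in fs}

print(natural_class({"voice"}))  # {'b', 'd'}: a class the privative set provides
print(lacks("voice"))            # {'p', 't'}: no shared feature picks this class out
```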


Continue reading

Features and the power of representations

🕑 13 min • 👤 Thomas Graf • 📆 June 06, 2019 in Discussions • 🏷 features, constraints, representations, generative capacity, subregular, strictly local, transductions

As you might have gleaned from my previous post, I’m not too fond of features, but I haven’t really given you a reason for that. The reason is actually straightforward: features lower complexity. By itself, that is a useful property. Trees lower the complexity of syntax, and nobody (or barely anybody) uses that as an argument that we should use strings. Distributing the workload between representations and operations/constraints over these representations is considered a good thing. Rightfully so, because factorization is generally a good idea.

But there is a crucial difference between trees and features. We actually have models of how trees are constructed from strings; you might have heard of them, they’re called parsers. And we have some ways of measuring the complexity of this process, e.g., asymptotic worst-case complexity. We lack a comparable theory for features. We’re using an enriched representation without paying attention to the computational cost of carrying out this enrichment. That’s no good; we’re just cheating ourselves in this case. Fortunately, listening to people talk about features for 48h at the workshop gave me an epiphany, and I’m here to share it with you.
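For concreteness, here is the textbook version of that kind of cost accounting: a bare-bones CKY recognizer over a made-up toy grammar (nothing MG-specific, just an illustration of what “measuring the cost of building trees from strings” can look like). Its three nested loops over span width, left edge, and split point give the familiar cubic worst-case bound in the length of the input, and that is exactly the sort of measure we currently lack for feature enrichment.

```python
from itertools import product

# Toy grammar in Chomsky normal form (hypothetical, for illustration only):
#   S -> NP VP,  VP -> V NP,  NP -> 'they' | 'fish',  V -> 'fish'
BINARY = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
LEXICAL = {"they": {"NP"}, "fish": {"NP", "V"}}

def cky_recognize(words):
    """CKY recognition: chart cell (i, j) collects every nonterminal that
    derives words[i:j]; the three nested loops below are what yields the
    O(n^3) worst-case bound in the length of the input."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, set()))
    for width in range(2, n + 1):          # span width
        for i in range(n - width + 1):     # left edge
            j = i + width
            for k in range(i + 1, j):      # split point
                for left, right in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= BINARY.get((left, right), set())
    return "S" in chart[0][n]

print(cky_recognize("they fish fish".split()))  # True
print(cky_recognize("fish they".split()))       # False
```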


Continue reading

Underappreciated arguments: The inverted T-model

🕑 9 min • 👤 Thomas Graf • 📆 May 15, 2019 in Discussions • 🏷 syntax, transductions, bimorphisms, T-model

There are many conceptual pillars of linguistics that are, for one reason or another, considered contentious outside the field. These include the competence/performance split, the grammar/parser dichotomy, underlying representations, and the inverted T-model. These topics have been discussed to death, but they keep coming up. Since it’s tiring to hear the same arguments over and over again, I figured it’d be interesting to discuss some little-known ones that are rooted in computational linguistics. This will be an ongoing series, and its inaugural entry is on the connection between the T-model and bimorphisms.


Continue reading