Representations as fossilized computation

🕑 13 min • 👤 Thomas Graf • 📆 November 30, 2020 in Discussions • 🏷 syntax, morphology, representations, features, category features, selection, subregular, logical transductions

Okay, show of hands, who still remembers my post on logical transductions from over a month ago? Everyone? Wonderful, then let’s dive into an issue that I’ve been thinking about for a while now. In the post on logical transductions, we saw that the process of rewriting one structure as another can itself be encoded as a structure. Something that we intuitively think of in dynamic terms as a process has been converted into a static representation, like a piece of fossilized computation. Once we look at representations as fossilized computations, the question becomes: what kind of computational fossils are linguistic representations?


Continue reading

Synchronous movement: What could go wrong?

🕑 7 min • 👤 Thomas Graf • 📆 October 12, 2020 in Discussions • 🏷 syntax, movement, Minimalist grammars, subregular

I know I promised you guys a follow-up post on logical transductions and the status of representations, but I just have to get this out first because it’s been gnawing at me for a few weeks now. The subregular view of syntax in terms of movement tiers has some limitations, and I think I’ve found a solution, one that somehow ends up looking a bit like the system in Beyond Explanatory Adequacy. The thing is, my solution is so simple that I fear I’m missing something very basic, some clear-cut empirical phenomenon that completely undermines my purported solution. So, syntacticians, this is your opportunity to sink my current love child in the comments section…


Continue reading

When parsing isn't about parsing

🕑 7 min • 👤 Thomas Graf • 📆 June 18, 2020 in Discussions • 🏷 syntax, morphology, parsing, formal language theory, movement

As a student I didn’t care much for work on syntactic parsing since I figured all the exciting big-picture stuff was in the specification of possible syntactic structures, not in how we infer these structures from strings. It’s a pretty conventional attitude, widely shared by syntacticians and a natural corollary of the competence-performance split — or so it seems. But as so often, what seems plausible and obvious at first glance quickly falls apart when you probe deeper. Even if you don’t care one bit about syntactic processing, parsing questions still have merit because they quickly turn into questions about syntactic architecture. This is best illustrated with a concrete example, in that abstract sense of “concrete” that everyone’s so fond of here at the outdex headquarters.


Continue reading

MR movement: Freezing effects & monotonicity

🕑 8 min • 👤 Thomas Graf • 📆 May 19, 2020 in Discussions • 🏷 syntax, movement, freezing effects, monotonicity

As you might know, I love reanalyzing linguistic phenomena in terms of monotonicity (see this earlier post, my JLM paper, and this NELS paper by my student Sophie Moradi). I’m now in the middle of writing another paper on this topic, and it currently includes a section on freezing effects. You see, freezing effects are obviously just bog-standard monotonicity, and I’m shocked that nobody else has pointed that out before. But perhaps the reason nobody’s pointed that out before is simple: my understanding of freezing effects does not match the facts. In the middle of writing the paper, I realized that I don’t know just how much freezing effects limit movement. So I figured I’d reveal my ignorance to the world and hopefully crowdsource some sorely needed insight.


Continue reading

Categorical statements about gradience

🕑 14 min • 👤 Thomas Graf • 📆 April 28, 2020 in Discussions • 🏷 phonology, syntax, algebra, gradience

Omer has a great post on gradience in syntax. I left a comment there that briefly touches on why gradience isn’t really that big of a deal thanks to monoids and semirings. But in a vacuum that remark might not make a lot of sense, so here’s some more background.
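To give a flavor of the remark (this sketch is mine, not from the post): the same recognizer, parametrized by a semiring, yields categorical well-formedness over the boolean semiring and gradient scores over the Viterbi semiring. The bigram weights below are purely hypothetical illustrations.

```python
# A minimal sketch of the semiring idea: one scoring function, and
# gradience vs. categoricity falls out of which semiring we plug in.

def score(sentence, weights, times, one, zero):
    """Combine the weights of adjacent word pairs with the semiring's times."""
    total = one
    for bigram in zip(sentence, sentence[1:]):
        total = times(total, weights.get(bigram, zero))
    return total

# Hypothetical bigram grammars for illustration only.
BOOL_W = {("the", "dog"): True, ("dog", "barks"): True}
REAL_W = {("the", "dog"): 1.0, ("dog", "barks"): 1.0,
          ("dog", "bark"): 0.3}  # degraded agreement, not impossible

# Boolean semiring: times = and, one = True, zero = False → categorical.
print(score(["the", "dog", "barks"], BOOL_W,
            lambda a, b: a and b, True, False))   # True
print(score(["the", "dog", "bark"], BOOL_W,
            lambda a, b: a and b, True, False))   # False

# Viterbi semiring: times = *, one = 1.0, zero = 0.0 → gradient.
print(score(["the", "dog", "bark"], REAL_W,
            lambda a, b: a * b, 1.0, 0.0))        # 0.3
```

The semiring’s addition operation (omitted here since each string has a single derivation) would take over once a string has multiple parses, e.g. max in the Viterbi semiring picks the best one.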


Continue reading

Against math: When sets are a bad setup

🕑 11 min • 👤 Thomas Graf • 📆 April 06, 2020 in Discussions • 🏷 methodology, syntax, set theory, Merge, linearization

Last time I gave you a piece of my mind when it comes to the Kuratowski definition of pairs and ordered sets, and why we should stay away from it in linguistics. The thing is, that was a conceptual argument, and those tend to fall flat with most researchers. Just like most mathematicians weren’t particularly fazed by Gödel’s incompleteness results because they didn’t impact their daily work, the average researcher doesn’t care about some impurities in their approach as long as it gets the job done. So this post will discuss a concrete case where a good linguistic insight got buried under mathematical rubble.
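As a quick refresher (my summary, not the post’s), the Kuratowski encoding defines the ordered pair purely in terms of unordered sets:

$$\langle a, b \rangle \;=\; \{\{a\}, \{a, b\}\}$$

The encoding validates statements like $\{a\} \in \langle a, b \rangle$ that are artifacts of the set-theoretic bookkeeping with no linguistic counterpart — exactly the kind of mathematical rubble at issue here.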


Continue reading

Against math: Kuratowski's spectre

🕑 8 min • 👤 Thomas Graf • 📆 March 30, 2020 in Discussions • 🏷 methodology, syntax, set theory, Merge, linearization

As some of you might know, my dissertation starts with a quote from My Little Pony. By Applejack, to be precise, the only pony that I could see myself having a beer with (and I don’t even like beer). You can watch the full clip, but here’s the line that I quoted:

Don’t you use your fancy mathematics to muddy the issue.

Truer words have never been spoken. In light of my obvious mathematical inclinations this might come as a surprise to some of you, but I don’t like using math just for the sake of math. Mathematical formalization is only worth it if it provides novel insights.


Continue reading

Trees for free with tree-free syntax

🕑 5 min • 👤 Thomas Graf • 📆 March 06, 2020 in Discussions • 🏷 syntax, strings, derivation trees, phrase structure trees

Here’s another quick follow-up to the unboundedness argument. As you might recall, that post discussed a very simple model of syntax whose only task was to adjudicate the well-formedness of a small number of strings. Even for such a limited task, and with such a simple model, it quickly became clear that we need a more modular approach to succinctly capture the facts and state important generalizations. But once we had this more modular perspective, it no longer mattered whether syntax is actually unbounded. Assuming unboundedness, denying unboundedness, it doesn’t matter because the overall nature of the approach does not hinge on whether we incorporate an upper bound on anything. Well, something very similar also happens with another aspect of syntax that is beyond doubt in some communities and highly contentious in others: syntactic trees.


Continue reading

Unboundedness is a red herring

🕑 13 min • 👤 Thomas Graf • 📆 February 20, 2020 in Discussions • 🏷 syntax, methodology, competence, performance

Jon’s post on the overappreciated Marr argument reminded me that it’s been a while since the last entry in the Underappreciated arguments series. And seeing how the competence-performance distinction showed up in the comments section of my post about why semantics should be like parsing, this might be a good time to talk about one of the central tenets of this distinction: unboundedness. Unboundedness, and the corollary that natural languages are infinite, is one of the first things that we teach students in a linguistics intro, and it is one of the first things that psychologists and other non-linguists will object to. But the dirty secret is that nothing really hinges on it.


Continue reading

Hey syntax, where's my carrot?

🕑 5 min • 👤 Thomas Graf • 📆 January 31, 2020 in Discussions • 🏷 syntax, textbooks, teaching

Last week I blogged a bit about syntax textbooks. One question I didn’t ask there, for fear of completely derailing the post, is what should actually be in a syntax textbook. There’s a common complaint I hear from students about syntax classes, and it’s that syntax courses are one giant bait-and-switch. They’re right, and it’s also true for syntax textbooks.


Continue reading