MR movement: Freezing effects & monotonicity

🕑 8 min • 👤 Thomas Graf • 📆 May 19, 2020 in Discussions • 🏷 syntax, movement, freezing effects, monotonicity

As you might know, I love reanalyzing linguistic phenomena in terms of monotonicity (see this earlier post, my JLM paper, and this NELS paper by my student Sophie Moradi). I’m now in the middle of writing another paper on this topic, and it currently includes a section on freezing effects. You see, freezing effects are obviously just bog-standard monotonicity, and I’m shocked that nobody else has pointed that out before. But perhaps the reason is simple: my understanding of freezing effects does not match the facts. In the middle of writing the paper, I realized that I don’t know just how much freezing effects actually limit movement. So I figured I’d reveal my ignorance to the world and hopefully crowdsource some sorely needed insight.
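For readers who skipped the earlier posts: monotonicity here just means order-preserving maps. As a toy sketch of the general recipe (the stages and the `extractable` function below are invented for this illustration, not the formalism from the paper), freezing amounts to the requirement that once a constituent has moved, its transparency for subextraction may only decrease, never recover:

```python
# Toy sketch: freezing as an order-preserving (monotone) map.
# The stages and extractable() are invented for this illustration;
# they are not taken from the paper.
from itertools import product

# A tiny linear order on how much the container has moved:
# 0 = in situ, 1 = moved once, 2 = moved again.
STAGES = [0, 1, 2]

def extractable(stage: int) -> bool:
    """Freezing: any movement of the container blocks subextraction."""
    return stage == 0

def monotone_decreasing(f, domain) -> bool:
    """x <= y implies f(x) >= f(y); True > False in Python's order."""
    return all(f(x) >= f(y) for x, y in product(domain, repeat=2) if x <= y)

print(monotone_decreasing(extractable, STAGES))  # True
```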


Continue reading

Martian substructures

🕑 2 min • 👤 Thomas Graf • 📆 May 06, 2020 in Discussions • 🏷 fun allowed

Sometimes students get hung up on the difference between substring and subsequence. But the works of Edgar Rice Burroughs have given me an idea for an exercise that might just be silly enough to permanently wedge itself into students’ memory.
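For anyone who wants the distinction in compact form (a quick Python sketch; the function names are mine, not from the exercise): a substring must be contiguous, whereas a subsequence only has to preserve order.

```python
def is_substring(needle: str, haystack: str) -> bool:
    """Contiguous: the symbols appear side by side in the haystack."""
    return needle in haystack

def is_subsequence(needle: str, haystack: str) -> bool:
    """Order-preserving, gaps allowed: consume the haystack left to right."""
    it = iter(haystack)
    return all(ch in it for ch in needle)

# "Mars" is a subsequence of "Martians" (M, a, r, s in order, with a gap),
# but not a substring, since the four symbols are not contiguous.
print(is_substring("Mars", "Martians"))    # False
print(is_subsequence("Mars", "Martians"))  # True
```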


Continue reading

Categorical statements about gradience

🕑 14 min • 👤 Thomas Graf • 📆 April 28, 2020 in Discussions • 🏷 phonology, syntax, algebra, gradience

Omer has a great post on gradience in syntax. I left a comment there that briefly touches on why gradience isn’t really that big of a deal thanks to monoids and semirings. But in a vacuum that remark might not make a lot of sense, so here’s some more background.
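The one-line version, as a hedged sketch rather than the full story in the post: keep the grammar fixed and vary the algebra its values live in. Over the Boolean semiring, judgments come out categorical; over a semiring with values in [0, 1], the very same bookkeeping yields degrees. The step weights below are invented for illustration:

```python
# Same derivation-scoring recipe over two semirings; the step values
# are made up for this sketch.
from functools import reduce

def score(steps, one, mul):
    """Combine the values of a derivation's steps with the semiring's
    multiplication, starting from the multiplicative unit."""
    return reduce(mul, steps, one)

# Boolean semiring ({True, False}, or, and): categorical well-formedness.
print(score([True, True, False], True, lambda a, b: a and b))  # False

# Viterbi semiring ([0, 1], max, *): gradient well-formedness.
# (Only multiplication is needed here; max would combine alternatives.)
print(score([1.0, 0.9, 0.5], 1.0, lambda a, b: a * b))  # 0.45
```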


Continue reading

Just your regular regular expression

🕑 6 min • 👤 Thomas Graf • 📆 April 24, 2020 in Discussions • 🏷 coding, fun allowed, methodology

Outdex posts can be a dull affair, always obsessed with language and computation (it’s the official blog motto, you know). Today, I will deviate from this with a post that’s obsessed with, wait for it, computation and language. Big difference. Our juicy topic will be regular expressions. And don’t you worry, we’ll get to the “and language” part.


Continue reading

Against math: When sets are a bad setup

🕑 11 min • 👤 Thomas Graf • 📆 April 06, 2020 in Discussions • 🏷 methodology, syntax, set theory, Merge, linearization

Last time I gave you a piece of my mind when it comes to the Kuratowski definition of ordered pairs, and why we should stay away from it in linguistics. The thing is, that was a conceptual argument, and those tend to fall flat with most researchers. Just like most mathematicians weren’t particularly fazed by Gödel’s incompleteness results because they didn’t impact their daily work, the average researcher doesn’t care about some impurities in their approach as long as it gets the job done. So this post will discuss a concrete case where a good linguistic insight got buried under mathematical rubble.
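In case you’re joining only now: Kuratowski encodes the ordered pair ⟨a, b⟩ as the pure set {{a}, {a, b}}. The encoding does recover order, but it has exactly the kind of edge-case quirks the previous post complained about, e.g. ⟨a, a⟩ collapses to {{a}}. A quick sketch with Python frozensets:

```python
# Kuratowski's encoding of ordered pairs as pure sets:
# <a, b> := {{a}, {a, b}}; frozensets stand in for sets.
def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

# Order is recoverable: <1, 2> and <2, 1> are distinct sets.
print(kpair(1, 2) == kpair(2, 1))  # False

# Edge case: <a, a> collapses to the singleton {{a}}, so the second
# coordinate leaves no visible trace in the set's structure.
print(kpair(1, 1))       # frozenset({frozenset({1})})
print(len(kpair(1, 1)))  # 1
```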


Continue reading

Against math: Kuratowski's spectre

🕑 8 min • 👤 Thomas Graf • 📆 March 30, 2020 in Discussions • 🏷 methodology, syntax, set theory, Merge, linearization

As some of you might know, my dissertation starts with a quote from My Little Pony. By Applejack, to be precise, the only pony that I could see myself having a beer with (and I don’t even like beer). You can watch the full clip, but here’s the line that I quoted:

Don’t you use your fancy mathematics to muddy the issue.

Truer words have never been spoken. In light of my obvious mathematical inclinations, this might come as a surprise to some of you, but I don’t like using math just for the sake of math. Mathematical formalization is only worth it if it provides novel insights.


Continue reading

"Star-Free Regular Languages and Logic" at KWRegan's Blog

🕑 1 min • 👤 Jeffrey Heinz • 📆 March 23, 2020 in Discussions • 🏷 math, logic, formal languages

Bill Idsardi brought this to my attention. Enjoy your reading!

Star-Free Regular Languages and Logic

on the Gödel’s Lost Letter and P=NP blog.


Continue reading

Trees for free with tree-free syntax

🕑 5 min • 👤 Thomas Graf • 📆 March 06, 2020 in Discussions • 🏷 syntax, strings, derivation trees, phrase structure trees

Here’s another quick follow-up to the unboundedness argument. As you might recall, that post discussed a very simple model of syntax whose only task was to adjudicate the well-formedness of a small number of strings. Even for such a limited task, and with such a simple model, it quickly became clear that we need a more modular approach to succinctly capture the facts and state important generalizations. But once we had this more modular perspective, it no longer mattered whether syntax is actually unbounded. Assume unboundedness or deny it, it doesn’t matter, because the overall nature of the approach does not hinge on whether we incorporate an upper bound on anything. Well, something very similar also happens with another aspect of syntax that is beyond doubt in some communities and highly contentious in others: syntactic trees.
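To make the modularity point concrete (a toy sketch with invented data, not the model from the original post): a grammar that just lists its well-formed strings states no generalizations, while a modular one factors them out, and nothing about the latter changes if you do or don’t impose an upper bound:

```python
# Toy contrast, with made-up strings rather than the data from the
# earlier post: a list-style grammar vs. a modular one.

# List-style: one monolithic table. Agreement is restated per string.
GOOD = {"she sleeps", "they sleep", "she sees them", "they see her"}

def wellformed_list(s: str) -> bool:
    return s in GOOD

# Modular: small independent statements (lexicon plus agreement).
# Subcategorization is ignored to keep the sketch short.
SUBJECTS = {"she": "sg", "they": "pl"}
VERBS = {"sleeps": "sg", "sleep": "pl", "sees": "sg", "see": "pl"}
OBJECTS = {"them", "her"}

def wellformed_modular(s: str) -> bool:
    words = s.split()
    if len(words) not in (2, 3):
        return False
    subj, verb, *obj = words
    return (subj in SUBJECTS
            and verb in VERBS
            and SUBJECTS[subj] == VERBS[verb]  # agreement, stated once
            and (not obj or obj[0] in OBJECTS))

print(wellformed_modular("they see her"))   # True
print(wellformed_modular("they sees her"))  # False: agreement violated
```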


Continue reading

Unboundedness, learning, and POS

🕑 6 min • 👤 Thomas Graf • 📆 February 26, 2020 in Discussions • 🏷 learnability, poverty of stimulus, lattices

Ignas Rudaitis left a comment under my unboundedness post that touches on an important issue: the interaction of unboundedness and the poverty of the stimulus (POS). My reply there had to be on the short side, so I figured I’d fill in the gaps with a short follow-up post.


Continue reading

Unboundedness is a red herring

🕑 13 min • 👤 Thomas Graf • 📆 February 20, 2020 in Discussions • 🏷 syntax, methodology, competence, performance

Jon’s post on the overappreciated Marr argument reminded me that it’s been a while since the last entry in the Underappreciated arguments series. And seeing how the competence-performance distinction showed up in the comments section of my post about why semantics should be like parsing, this might be a good time to talk about one of the central tenets of this distinction: unboundedness. Unboundedness, and the corollary that natural languages are infinite, is one of the first things that we teach students in a linguistics intro, and it is one of the first things that psychologists and other non-linguists will object to. But the dirty secret is that nothing really hinges on it.


Continue reading