Semantics: Corrections and further thoughts

🕑 6 min • 👤 Thomas Graf • 📆 January 08, 2020 in Discussions • 🏷 semantics, donkey sentences, parsing

This is a follow-up to my previous post on semantics. It has been pointed out to me that that post contains several inaccuracies and grave omissions. Some of them are in the summary of Lucas’ talk, and they would probably have been noticed earlier if I had provided a link to the slides or the paper. Thanks to Lucas for sending me those by email and for walking me through the account again. I’ll briefly explain some of the misleading points later on in this post.

But the much bigger issue is that I failed to point out that Lucas wasn’t just presenting his own work. He made it very, very clear that this was joint work with Dylan Bumford (UCLA) and Robert Henderson (UArizona). I’m really upset with myself about that one; in some sense, giving partial credit is even worse than giving no credit at all, and the latter is already a dick move. My sincerest apologies to Dylan and Robert.

If I had run the post past Lucas before publishing it, a lot of this could have been avoided, so I’ll make that a priority for future posts that discuss work I’m not well-acquainted with. Alright, so let’s talk a bit about what I got wrong and how that affects the central message of the previous post.


Continue reading

Semantics should be like parsing

🕑 5 min • 👤 Thomas Graf • 📆 December 28, 2019 in Discussions • 🏷 semantics, donkey sentences, parsing

I spent a few days before Christmas at the Amsterdam colloquium, which exposed me to a much heavier dose of semantics than I’m used to. I’ve always had a difficult relationship with semantics. On the one hand, I like that it has its fair share of KISS theories, and generalized quantifier theory is aesthetically very pleasing to me. On the other hand, most of semantics is pretty dull, and I think that’s because semanticists put way too much stuff in their theories that has nothing to do with natural language semantics. I’ve previously had a hard time putting this into concrete terms, but Lucas Champollion’s invited talk on donkey sentences finally presented me with a specific example.


Continue reading

KISSing semantics: Subregular complexity of quantifiers

🕑 9 min • 👤 Thomas Graf • 📆 July 26, 2019 in Discussions • 🏷 subregular, strictly local, tier-based strictly local, monotonicity, quantifiers, semantics, typology

I promised, and you shall receive: a KISS account of a particular aspect of semantics. Remember, KISS means that the account covers a very narrowly circumscribed phenomenon, makes no attempt to integrate with other theories, and instead aims to be maximally simple and self-contained. And now for the actual problem:

It has been noted before that not every logically conceivable quantifier can be realized by a single “word”. Those are very deliberate scare quotes around word, as that isn’t quite the right notion — if it can even be defined. But let’s ignore that for now and focus just on the basic facts. We have every for the universal quantifier \(\forall\), some for the existential quantifier \(\exists\), and no, which corresponds to \(\neg \exists\). English is not an outlier; these three quantifiers are very common across languages. But there seems to be no language with a single word for not all, i.e. \(\neg \forall\). Now why the heck is that? If language is fine with stuffing \(\neg \exists\) into a single word, why not \(\neg \forall\)? Would you be shocked if I told you the answer is monotonicity? Actually, the full answer is monotonicity + subregularity, but one thing at a time.
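
To make the monotonicity angle concrete before the full account, here is a brute-force sketch in Python. This is my own toy illustration, not the machinery from the actual account, and the domain size and all names in it are made up. It models each determiner as a generalized quantifier, i.e. a relation between a restrictor set A and a scope set B, and tests monotonicity in the scope argument by exhaustive search over a small domain.

```python
from itertools import combinations

DOMAIN = range(4)  # a tiny toy universe of individuals

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Determiners as relations between a restrictor A and a scope B.
QUANTIFIERS = {
    "every":   lambda A, B: A <= B,        # forall
    "some":    lambda A, B: bool(A & B),   # exists
    "no":      lambda A, B: not (A & B),   # neg exists
    "not all": lambda A, B: not (A <= B),  # neg forall
}

def scope_monotonicity(q):
    """Check by brute force whether q is upward or downward
    monotone in its scope argument."""
    up = down = True
    sets = subsets(DOMAIN)
    for A in sets:
        for B in sets:
            if not q(A, B):
                continue
            for B2 in sets:
                if B <= B2 and not q(A, B2):
                    up = False    # growing the scope destroyed truth
                if B2 <= B and not q(A, B2):
                    down = False  # shrinking the scope destroyed truth
    return "upward" if up else "downward" if down else "neither"

for name, q in QUANTIFIERS.items():
    print(f"{name:7} is {scope_monotonicity(q)} monotone in its scope")
```

Every and some come out upward monotone, no and not all downward monotone. Note how monotonicity by itself lumps \(\neg \exists\) and \(\neg \forall\) together, which is exactly why the subregular half of the answer has to pull its weight.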


Continue reading

I'm not done KISSing yet

🕑 3 min • 👤 Thomas Graf • 📆 July 25, 2019 in Discussions • 🏷 methodology, semantics

I just got back from MOL (Mathematics of Language) in Toronto, and much to my own surprise I actually got to talk some more about KISS theories there. As you might recall, my last post tried to make a case for simpler accounts that handle only one phenomenon but do so exceedingly well, without the burden of machinery that is needed for other phenomena. My post only listed two examples from syntax, as I was under the impression that this is a rare approach in linguistics, so I didn’t dig much deeper for examples. But at MOL I saw Yoad Winter give a beautiful KISS account of presupposition projection (here’s the paper version). That’s when it hit me: in semantics, KISS is pretty much the norm!


Continue reading

The anti anti missile missile argument argument

🕑 7 min • 👤 Thomas Graf • 📆 June 21, 2019 in Discussions • 🏷 formal language theory, generative capacity, morphology, semantics

Computational linguists overall agree that morphology, with the exception of reduplication, is regular. Here regular is meant in the sense of formal language theory. For any given natural language, the set of well-formed surface forms is a regular string set, which means that it is recognized by a finite-state automaton, is definable in monadic second-order logic, is a projection of a strictly 2-local string set, has a right congruence relation of finite index, yada yada yada. There are a million ways to characterize regularity, but the bottom line is that morphology defines string sets of fairly limited complexity. The mapping from underlying representations to surface forms is also very limited, as everything (again modulo reduplication) can be handled by non-deterministic finite-state transducers. It’s a pretty nifty picture, though somewhat loose to my subregular eyes, which immediately pick up on all the regular things you don’t find in morphology. Still, it’s a valuable result that provides a rough approximation of what morphology is capable of; a decent starting point for further inquiry. However, there is one empirical argument that is inevitably brought up whenever I talk about the regularity of morphology. It’s like an undead abomination that keeps rising from the grave, and today I’m here to hose it down with holy water.
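
For readers who haven’t seen what regular means in practice, below is a minimal sketch with an invented toy lexicon, nothing like a real analyzer. A finite-state acceptor only ever remembers which of finitely many states it is in, and suffixing morphotactics fits that mold directly. Full-scale systems built with finite-state toolkits such as foma or HFST are the same idea at scale, with transducers handling the mapping from underlying to surface forms.

```python
# A toy finite-state acceptor for an invented fragment of
# derivational morphology: adjective stem, any number of "-ish",
# optionally closed off by "-ness".
# e.g. "green", "green-ish-ish", "green-ish-ness".
TRANSITIONS = {
    ("START", "ADJ"):  "ADJ",   # a bare adjective stem
    ("ADJ",   "ish"):  "ADJ",   # -ish loops back to the adjective state
    ("ADJ",   "ness"): "NOUN",  # -ness derives a noun and ends the word
}
FINAL_STATES = {"ADJ", "NOUN"}

def accepts(morphemes):
    """Run the acceptor over a sequence of morpheme labels."""
    state = "START"
    for m in morphemes:
        state = TRANSITIONS.get((state, m))
        if state is None:       # no transition available: ill-formed word
            return False
    return state in FINAL_STATES

print(accepts(["ADJ", "ish", "ish", "ness"]))  # True
print(accepts(["ADJ", "ness", "ish"]))         # False: -ish after -ness
```

The finite transition table is the whole point: no counting, no unbounded memory. That is also why reduplication, which requires copying an arbitrarily long stem, is the odd one out.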


Continue reading