Who watches the NEG-raisers?

🕑 2 min • 👤 Thomas Graf • 📆 June 13, 2019 in Discussions • 🏷 fun allowed, negative concord, NEG raising

I reread Alan Moore’s Watchmen today. Still amazing, not one bit overrated, and whenever I pick it up I can’t help but finish it in one sitting. But did you know that Watchmen actually challenges the very foundations of syntactic theory?


Continue reading

Some observations on privative features

🕑 9 min • 👤 Thomas Graf • 📆 June 11, 2019 in Discussions • 🏷 features, privativity, phonology, syntax

One topic that came up at the feature workshop is whether features are privative or binary (aka equipollent). Among mathematical linguists it’s part of the general folklore that there is no meaningful distinction between the two. Translating from a privative feature specification to a binary one is trivial. If we have three features \(f\), \(g\), and \(h\), then the privative bundle \(\{f, g\}\) is equivalent to \([+f, +g, -h]\). In the other direction, we can make binary features privative by simply interpreting the \(+\)/\(-\) as part of the feature name. That is to say, \(-f\) isn’t a feature \(f\) with value \(-\); it’s simply the privative feature \(\text{minus-}f\). Some arguments add a bit of sophistication to this, e.g. the Boolean algebra perspective in Keenan & Moss’s textbook Mathematical Structures in Language. So far so good. Or rather: so far, so unsatisfactory.
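To see just how trivial the translation is, here’s a minimal sketch in Python. The three-feature inventory and the minus_ naming convention are purely illustrative assumptions, not anyone’s official proposal.

```python
# A minimal sketch of the privative <-> binary translation described above.
# The inventory and the "minus_" naming convention are purely illustrative.

INVENTORY = {"f", "g", "h"}  # hypothetical feature inventory

def privative_to_binary(bundle: set[str]) -> dict[str, str]:
    """A privative bundle becomes a full binary specification:
    + for every feature in the bundle, - for every other feature."""
    return {feat: "+" if feat in bundle else "-" for feat in sorted(INVENTORY)}

def binary_to_privative(spec: dict[str, str]) -> set[str]:
    """Fold the value into the feature name: [+f] stays f,
    while [-f] becomes the privative feature minus_f."""
    return {feat if val == "+" else f"minus_{feat}" for feat, val in spec.items()}

print(privative_to_binary({"f", "g"}))
# {'f': '+', 'g': '+', 'h': '-'}
print(binary_to_privative({"f": "+", "g": "+", "h": "-"}))
# {'f', 'g', 'minus_h'} (a set, so display order may vary)
```

Note that the round trip is lossless either way, which is exactly why the folklore treats the two formats as notational variants.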


Continue reading

Surprising theorems

🕑 4 min • 👤 Thomas Graf • 📆 June 08, 2019 in Discussions • 🏷 history, literature, formal language theory

Time for a quick break from the ongoing feature saga. A recent post on the Computational Complexity blog laments that theorems in complexity theory have become predictable. Even when a hard problem is finally solved after decades of research, the answer usually goes in the expected direction. Gone are the days of results that come completely out of left field. This got me wondering whether mathematical linguistics still has surprising theorems to offer.


Continue reading

Features and the power of representations

🕑 13 min • 👤 Thomas Graf • 📆 June 06, 2019 in Discussions • 🏷 features, constraints, representations, generative capacity, subregular, strictly local, transductions

As you might have gleaned from my previous post, I’m not too fond of features, but I haven’t really given you a reason for that. The reason is actually straightforward: features lower complexity. By itself, that is a useful property. Trees lower the complexity of syntax, and nobody (or barely anybody) uses that as an argument that we should use strings. Distributing the workload between representations and operations/constraints over these representations is considered a good thing. Rightly so, because factorization is generally a good idea.

But there is a crucial difference between trees and features. We actually have models of how trees are constructed from strings — you might have heard of them, they’re called parsers. And we have ways of measuring the complexity of this process, e.g. asymptotic worst-case complexity. We lack a comparable theory for features. We’re using an enriched representation without paying attention to the computational cost of carrying out this enrichment. That’s no good; we’re just cheating ourselves. Fortunately, listening to people talk about features for 48 hours at the workshop gave me an epiphany, and I’m here to share it with you.


Continue reading

Omnivorous number and Kiowa inverse marking: Monotonicity trumps features?

🕑 10 min • 👤 Thomas Graf • 📆 May 31, 2019 in Discussions • 🏷 features, monotonicity, morphosyntax, hierarchies, omnivorous number, inverse marking, Kiowa

I just came back from a workshop in Tromsø on syntactic features, organized by Peter Svenonius and Craig Sailor — thanks for the invitation, folks! Besides yours truly, the invited speakers were Susana Béjar, Daniel Harbour, Michelle Sheehan, and Omer Preminger. It was a very interesting and productive meeting, and plenty of fun. We got along really well, like a Justice League of feature research (but who’s Aquaman?).

In the next few weeks I’ll post on various topics that came up during the workshop, in particular privative features. But for now, I’d like to comment on one particular issue regarding the feature representation of number and how it matters for omnivorous number and Kiowa inverse marking. Peter has an excellent write-up on his blog, and I suggest that the main discussion about features be kept there. This post will present a very different point of view that basically says “suck it, features!” and instead uses hierarchies and monotonicity.


Continue reading

Underappreciated arguments: Underlying representations

🕑 4 min • 👤 Thomas Graf • 📆 May 28, 2019 in Discussions • 🏷 phonology, morphology, underlying representations, abstractness, bimorphisms, T-model

Time for another entry in the Underappreciated arguments series. This post will be pretty short as it is a direct continuation of the previous entry on how the inverted T-model emerges naturally from the bimorphism perspective. You see, the very same argument also gives rise to underlying representations in phonology and morphology.


Continue reading

Beeing a linguist

🕑 1 min • 👤 Thomas Graf • 📆 May 22, 2019 in Discussions • 🏷 fun allowed

Continuing yesterday’s theme of having fun, here’s a highly, highly accurate typology of our field in picture form.


Continue reading

A song of middles and suffixes

🕑 3 min • 👤 Thomas Graf • 📆 May 21, 2019 in Discussions • 🏷 phonology, subregular, strictly local, fun allowed

Am I the only one who’s worn out by the total lack of fun and playfulness in all public matters? Everything is serious business, everything is one word away from a shit storm, everybody has to be proper and professional all the time, no fun allowed. It’s the decade of buzzkills, killjoys, and sourpusses. Linguistics is very much in line with this unfortunate trend. Gone are the days of Norbert Hornstein dressing up as Nim Chimpsky. It is unthinkable to publish a paper under the pseudonym Quang Phuc Dong (that’s at least a micro-aggression, if not worse). Even a tongue-in-cheek post on Faculty of Language is immediately under suspicion of being dismissive. Should have added that /s tag to spell it out.

Compared to other fields, linguistics has never been all that playful, perhaps because we’re already afraid of not being taken seriously by other fields. But we’ve had one proud torch bearer in this respect: Geoff Pullum. His Topic… Comment column should be mandatory grad school reading. Formal Linguistics Meets the Boojum is a classic for the ages (and did, of course, get a very proper and professional response). My personal favorite is his poetic take on the Halting problem. So I figured that, instead of complaining, I’d lead by example and inject some fun of my own. To be honest, I’m probably better at complaining, but here we go…


Continue reading

Two dimensions of talking past one another

🕑 8 min • 👤 Thomas Graf • 📆 May 20, 2019 in Discussions • 🏷 theory, methodology, Minimalism

This post will be a bit of a mess since it’s mostly me trying to systematize some issues I’ve been grappling with for a while now. I have discussed my research with people who come from very different backgrounds: theoretical linguists, computer scientists, molecular biologists, physicists, and so on. Many of these discussions have involved a fair amount of talking past one another. To some extent that’s unavoidable without a lot of shared common ground. But ironically, most of the talking past one another didn’t occur with, say, biologists, but with theoretical linguists, in particular Minimalists. The rest of this post presents my personal explanation for what might be going on there. I believe there are two factors at play. Both concern the horribly fuzzy notion of a linguistic theory.


Continue reading

Underappreciated arguments: The inverted T-model

🕑 9 min • 👤 Thomas Graf • 📆 May 15, 2019 in Discussions • 🏷 syntax, transductions, bimorphisms, T-model

There are many conceptual pillars of linguistics that are, for one reason or another, considered contentious outside the field. These include the competence/performance split, the grammar/parser dichotomy, underlying representations, and the inverted T-model. These topics have been discussed to death, but they keep coming up. Since it’s tiring to hear the same arguments over and over again, I figure it’d be interesting to discuss some little-known ones that are rooted in computational linguistics. This will be an ongoing series, and its inaugural entry is on the connection between the T-model and bimorphisms.


Continue reading