Representations as fossilized computation

🕑 13 min • 👤 Thomas Graf • 📆 November 30, 2020 in Discussions • 🏷 syntax, morphology, representations, features, category features, selection, subregular, logical transductions

Okay, show of hands, who still remembers my post on logical transductions from over a month ago? Everyone? Wonderful, then let’s dive into an issue that I’ve been thinking about for a while now. In that post, we saw that the process of rewriting one structure as another can itself be encoded as a structure. Something that we intuitively think of in dynamic terms, as a process, has been converted into a static representation, like a piece of fossilized computation. Once we look at representations as fossilized computations, the question becomes: what kind of computational fossils are linguistic representations?
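To give a taste of what "fossilized computation" means in practice, here is a minimal sketch of a logical transduction over strings. This is my own toy illustration, not the formalism from the post: the "process" of reversing a string is nothing more than a static pair of defining formulas.

```python
# Minimal sketch of a logical transduction over strings (illustrative,
# not the post's formalism): the rewriting process is fully specified
# by a static bundle of defining formulas.

def string_to_model(s):
    """Encode a string as a logical structure: a domain of positions,
    a successor relation, and a labeling function."""
    dom = list(range(len(s)))
    succ = {(i, i + 1) for i in dom[:-1]}
    label = {i: s[i] for i in dom}
    return dom, succ, label

# The transduction "reverse the string" as two defining formulas:
# positions keep their labels, and output-successor holds of (x, y)
# iff input-successor holds of (y, x).
reverse = {
    "label": lambda label, x: label[x],
    "succ":  lambda succ, x, y: (y, x) in succ,
}

def apply_transduction(tau, s):
    """Evaluate the defining formulas over the input model and read the
    output string back off the new successor relation. Assumes a
    nonempty input string."""
    dom, succ, label = string_to_model(s)
    out_succ = {(x, y) for x in dom for y in dom if tau["succ"](succ, x, y)}
    out_label = {x: tau["label"](label, x) for x in dom}
    first = next(x for x in dom if not any(p == x for (_, p) in out_succ))
    result, cur, nxt = [out_label[first]], first, dict(out_succ)
    while cur in nxt:
        cur = nxt[cur]
        result.append(out_label[cur])
    return "".join(result)

print(apply_transduction(reverse, "abc"))  # -> "cba"
```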


Continue reading

More observations on privative features

🕑 7 min • 👤 Thomas Graf • 📆 June 17, 2019 in Discussions • 🏷 features, privativity, phonology, syntax, transductions

In an earlier post I looked at privativity in the domain of feature sets: given a collection of features, what conditions must be met by their extensions in order for these features to qualify as privative? But that post concluded with the observation that looking at the features in isolation might be barking up the wrong tree. Features are rarely of interest on their own; what matters is how they interact with the rest of the grammatical machinery. This is the step from a feature set to a feature system. Naively, one might expect that a privative feature set gives rise to a privative feature system. But that’s not at all the case. The reason for that is easy to explain yet difficult to fix.
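To preview the sort of problem at issue, here is a toy sketch (my own construction, not the post's definitions): even when every feature bundle is privative, a single rule that tests for the absence of a feature reintroduces binarity through the back door.

```python
# Toy illustration of the gap between feature set and feature system
# (my sketch, not the post's formal definitions). Segments are
# privative bundles: a feature is either present or absent.

stop_t = {"coronal"}               # voiceless: simply lacks [voice]
stop_d = {"coronal", "voice"}

# A constraint that only ever tests for the *presence* of features
# keeps the system privative:
def has_voiced_coronal(inventory):
    return any({"coronal", "voice"} <= seg for seg in inventory)

# But the moment a rule tests for the *absence* of a feature, the lack
# of [voice] becomes informative, which is functionally a [-voice]:
def devoiced_word_finally(segment, word_final):
    return word_final and "voice" not in segment   # negation sneaks in

# The bundles are privative, yet the system that uses them is not.
print(has_voiced_coronal([stop_t, stop_d]))        # True
print(devoiced_word_finally(stop_t, word_final=True))  # True
```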


Continue reading

Some observations on privative features

🕑 9 min • 👤 Thomas Graf • 📆 June 11, 2019 in Discussions • 🏷 features, privativity, phonology, syntax

One topic that came up at the feature workshop is whether features are privative or binary (aka equipollent). Among mathematical linguists it’s part of the general folklore that there is no meaningful distinction between the two. Translating from a privative feature specification to a binary one is trivial. If we have three features \(f\), \(g\), and \(h\), then the privative bundle \(\{f, g\}\) is equivalent to \([+f, +g, -h]\). In the other direction, we can make binary features privative by simply interpreting the \(+\)/\(-\) as part of the feature name. That is to say, \(-f\) isn’t a feature \(f\) with value \(-\), it’s simply the privative feature \(\text{minus-}f\). Some accounts add a bit of sophistication to this, e.g. the Boolean algebra perspective in Keenan & Moss’s textbook Mathematical Structures in Language. So far, so unsatisfactory.
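For concreteness, the folklore translation is short enough to write down. A sketch, with made-up feature names:

```python
# Both directions of the folklore translation (feature names made up).
FEATURES = ["f", "g", "h"]

def privative_to_binary(bundle):
    """A present feature becomes +, an absent one becomes -."""
    return {feat: "+" if feat in bundle else "-" for feat in FEATURES}

def binary_to_privative(spec):
    """Fold the value into the feature name: -f becomes 'minus_f'."""
    return {feat if val == "+" else "minus_" + feat
            for feat, val in spec.items()}

print(privative_to_binary({"f", "g"}))
# {'f': '+', 'g': '+', 'h': '-'}
print(binary_to_privative({"f": "+", "g": "+", "h": "-"}))
# e.g. {'f', 'g', 'minus_h'} (set order may vary)
```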


Continue reading

Features and the power of representations

🕑 13 min • 👤 Thomas Graf • 📆 June 06, 2019 in Discussions • 🏷 features, constraints, representations, generative capacity, subregular, strictly local, transductions

As you might have gleaned from my previous post, I’m not too fond of features, but I haven’t really given you a reason for that. The reason is actually straightforward: features lower complexity. By itself, that’s a useful property. Trees lower the complexity of syntax, and nobody (or hardly anybody) uses that as an argument that we should use strings. Distributing the workload between representations and operations/constraints over these representations is considered a good thing. Rightfully so, because factorization is generally a good idea.

But there is a crucial difference between trees and features. We actually have models of how trees are constructed from strings — you might have heard of them, they’re called parsers. And we have ways of measuring the complexity of this process, e.g. asymptotic worst-case complexity. We lack a comparable theory for features. We’re using an enriched representation without paying attention to the computational cost of carrying out the enrichment. That’s no good; we’re just cheating ourselves. Fortunately, listening to people talk about features for 48 hours at the workshop gave me an epiphany, and I’m here to share it with you.
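To make both points concrete, here is a toy sketch (my own example, not from the post): a counting constraint that is not strictly local over bare strings becomes locally checkable once each segment is enriched with a feature, but the enrichment itself is a left-to-right computation whose cost the local check quietly presupposes.

```python
# Toy demo (my example): "exactly one stressed vowel per word" is not
# strictly local over the bare string, but becomes locally checkable
# once each segment carries a "stress already seen" feature. The catch
# the post points at: the enrichment is itself a left-to-right
# computation whose cost goes unmeasured.

def enrich(word):
    """Annotate each segment with whether stress occurred earlier.
    Toy convention: uppercase = stressed."""
    seen, out = False, []
    for seg in word:
        out.append((seg, seen))
        if seg.isupper():
            seen = True
    return out

def exactly_one_stress(word):
    """Local check over the enriched string: ban any stressed segment
    whose feature says stress was already seen, and require the final
    position to show that stress has occurred (or be stressed itself)."""
    if not word:
        return False
    rich = enrich(word)
    if any(seg.isupper() and seen for seg, seen in rich):
        return False                 # a second stress is locally visible
    last_seg, last_seen = rich[-1]
    return last_seen or last_seg.isupper()

print(exactly_one_stress("baNana"))   # True: exactly one stress
print(exactly_one_stress("banana"))   # False: no stress
print(exactly_one_stress("BaNana"))   # False: two stresses
```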


Continue reading

Omnivorous number and Kiowa inverse marking: Monotonicity trumps features?

🕑 10 min • 👤 Thomas Graf • 📆 May 31, 2019 in Discussions • 🏷 features, monotonicity, morphosyntax, hierarchies, omnivorous number, inverse marking, Kiowa

I just came back from a workshop in Tromsø on syntactic features, organized by Peter Svenonius and Craig Sailor — thanks for the invitation, folks! Besides yours truly, the invited speakers were Susana Béjar, Daniel Harbour, Michelle Sheehan, and Omer Preminger. I think it was a very interesting and productive meeting, and a lot of fun, too. We got along really well, like a Justice League of feature research (but who’s Aquaman?).

In the next few weeks I’ll post on various topics that came up during the workshop, in particular privative features. But for now, I’d like to comment on one particular issue regarding the feature representation of number and how it matters for omnivorous number and Kiowa inverse marking. Peter has an excellent write-up on his blog, and I suggest that the main discussion about features be kept there. This post will present a very different point of view that basically says “suck it, features!” and instead uses hierarchies and monotonicity.
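To give a taste of the alternative, here is a hedged sketch of the kind of monotonicity test such an account turns on. The hierarchy and the maps below are placeholders of my own, not the analysis from the post or from Peter's write-up.

```python
# Hedged sketch: testing whether a map over an ordered hierarchy is
# monotonic. The number hierarchy and the example maps are
# placeholders, not the analysis from the post.

HIERARCHY = ["sg", "du", "pl"]          # hypothetical number hierarchy
rank = {v: i for i, v in enumerate(HIERARCHY)}

def is_monotonic(mapping):
    """True iff the map preserves the hierarchy's order: whenever
    x <= y in the hierarchy, mapping(x) <= mapping(y)."""
    return all(
        rank[mapping[x]] <= rank[mapping[y]]
        for x in HIERARCHY for y in HIERARCHY
        if rank[x] <= rank[y]
    )

# A map collapsing du and pl to pl preserves the order ...
print(is_monotonic({"sg": "sg", "du": "pl", "pl": "pl"}))   # True
# ... while one sending du above pl reverses it:
print(is_monotonic({"sg": "sg", "du": "pl", "pl": "sg"}))   # False
```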


Continue reading