Representations as fossilized computation

🕑 13 min • 👤 Thomas Graf • 📆 November 30, 2020 in Discussions • 🏷 syntax, morphology, representations, features, category features, selection, subregular, logical transductions

Okay, show of hands, who still remembers my post on logical transductions from over a month ago? Everyone? Wonderful, then let’s dive into an issue that I’ve been thinking about for a while now. In the post on logical transductions, we saw that the process of rewriting one structure as another can itself be encoded as a structure. Something that we intuitively think of in dynamic terms as a process has been converted into a static representation, like a piece of fossilized computation. Once we look at representations as fossilized computations, the question becomes: what kind of computational fossils are linguistic representations?
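To make "fossilized computation" a bit more tangible, here is a minimal toy sketch in Python. This is my own construction, not an example from the post: it uses the standard copies-of-positions trick from logical transductions, where every output node is a (copy, input position) pair, so the entire rewriting process is recorded statically inside one structure.

```python
def double_transduction(word):
    """Map a string w to ww using two copies of each input position.

    Output nodes, labels, and order are all defined directly over the
    input, so the transduction is "fossilized" into a single structure.
    """
    n = len(word)
    # output node set: two copies of every input position
    nodes = [(c, i) for c in (1, 2) for i in range(n)]
    # label formula: node (c, i) inherits the input label at position i
    label = {(c, i): word[i] for (c, i) in nodes}
    # successor formula: within a copy, follow input order; the last
    # node of copy 1 precedes the first node of copy 2
    succ = {}
    for c, i in nodes:
        if i + 1 < n:
            succ[(c, i)] = (c, i + 1)
        elif c == 1 and n > 0:
            succ[(1, n - 1)] = (2, 0)
    return nodes, label, succ

nodes, label, succ = double_transduction("abc")
# Reading the labels off in successor order yields "abcabc", but the
# fossil also records where each output symbol came from in the input.
```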


Continue reading

Overappreciated arguments: Marr's three levels

🕑 11 min • 👤 Jon Rawski • 📆 January 12, 2020 in Discussions • 🏷 neuroscience, representations, Marr

<TG soapbox> The following is a guest post by Jon Rawski. I’m ruthlessly abusing my editorial powers to insert this reminder that the Outdex welcomes guest posts. Maybe you’d like to start a discussion on a topic that’s dear to your heart? Or maybe you have something to add to an ongoing discussion that won’t fit in the comments section because it’s too long and involves multiple pictures and figures? Just send me your post in some editable format (not PDF) and I’ll try to post it ASAP. If you want to shorten the time from sending to posting, check the instructions on how to reduce my editorial load. Anyways, enough of my blabbering, let’s hear what Jon has to say… </TG soapbox>

To spice up the Underappreciated Arguments series, I thought I’d describe a rhetorical chestnut beloved by many a linguist: Marr’s Three Levels. Anyone who has taken a linguistics class that dips a toe into the cognitive pool has heard of the Three Levels argument. It’s so ubiquitous that it’s been memed by grad students.


Continue reading

Features and the power of representations

🕑 13 min • 👤 Thomas Graf • 📆 June 06, 2019 in Discussions • 🏷 features, constraints, representations, generative capacity, subregular, strictly local, transductions

As you might have gleaned from my previous post, I’m not too fond of features, but I haven’t really given you a reason for that. The reason is actually straightforward: features lower complexity. By itself, that is a useful property. Trees lower the complexity of syntax, and nobody (or barely anybody) uses that as an argument that we should use strings. Distributing the workload between representations and operations/constraints over these representations is considered a good thing. Rightfully so, because factorization is generally a good idea.

But there is a crucial difference between trees and features. We actually have models of how trees are constructed from strings — you might have heard of them, they’re called parsers. And we have some ways of measuring the complexity of this process, e.g. asymptotic worst-case complexity. We lack a comparable theory for features. We’re using an enriched representation without paying attention to the computational cost of carrying out this enrichment. That’s no good; we’re just cheating ourselves. Fortunately, listening to people talk about features for 48 hours at the workshop gave me an epiphany, and I’m here to share it with you.
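As a concrete illustration of how feature enrichment can smuggle in computation, here is a toy sketch of my own (not an example from the post). A long-distance constraint over plain strings — "no s may be followed by sh anywhere later in the word" — is not strictly local, but it becomes strictly 1-local once each position carries a [seen-s] feature. The catch is that computing that feature already required a full scan of the string.

```python
def annotate(word):
    """Enrich each segment with a [seen-s] feature (the hidden work:
    a full left-to-right scan of the string)."""
    seen_s = False
    enriched = []
    for seg in word:
        enriched.append((seg, seen_s))
        if seg == "s":
            seen_s = True
    return enriched

def locally_wellformed(enriched):
    """Strictly 1-local check over the enriched alphabet: just ban
    the single symbol (sh, seen-s=True)."""
    return all(not (seg == "sh" and seen) for seg, seen in enriched)

print(locally_wellformed(annotate(["s", "a", "sh"])))  # False
print(locally_wellformed(annotate(["sh", "a", "s"])))  # True
```

The local constraint looks trivially simple, but only because the annotation step already did the non-local work: the complexity moved into the representation rather than disappearing.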


Continue reading